I could be fabulously wealthy right now. When I first wrote about Bitcoin, the price was $14.65 USD, and as of writing, the price is hovering just above $12,500 after undergoing a few ups and downs since December. If I had bought 10 BTC that day for a $146.50 USD investment, those 10 Bitcoins would be worth over $125,000.
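
Purely for illustration, the arithmetic behind that “what if” is trivial; here is a quick back-of-the-envelope sketch in Python using only the figures quoted above (snapshots from this post, not live market data):

```python
# Hypothetical return on the Bitcoin purchase described above.
# Prices are the ones quoted in this post, not live data.
btc_bought = 10
price_then = 14.65      # USD per BTC when I first wrote about Bitcoin
price_now = 12_500.00   # USD per BTC at the time of writing

cost = btc_bought * price_then
value_now = btc_bought * price_now

print(f"Initial outlay: ${cost:,.2f}")              # $146.50
print(f"Value today:    ${value_now:,.2f}")         # $125,000.00
print(f"Multiple:       {value_now / cost:,.0f}x")  # roughly 850x
```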

That’s quite a large “what if”.

I am what is known by Bitcoin proponents as a “nocoiner”, a person who knew about BTC early on but did not invest. However, I sleep quite well at night, as I am certain that even if I had invested then, I would have sold my coins at various stages of the price rise. Most importantly, many of the doubts that I had back then still remain, and new problems have arisen since.

So what are the issues? (for an earlier look at some regulatory problems, see this article).

Failure as a method of payment

One of the earliest promises of Bitcoin was that it would be an unparalleled decentralised, open source currency in which transactions would be fast, cheap and transparent, as they would be recorded in a distributed, cryptographically secured, immutable public ledger called the blockchain. The currency is not issued by a central body, but rather mined by people around the world who dedicate computing power to verifying transactions and are rewarded for their efforts.
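
For readers unfamiliar with what “dedicating computing power to perform transaction verifications” means in practice, the toy sketch below illustrates the general idea of proof-of-work hashing. It is a deliberately simplified illustration, not real Bitcoin mining, which uses double SHA-256 over a binary block header and a dynamically adjusted difficulty target:

```python
import hashlib

def toy_mine(block_data: str, difficulty: int = 4):
    """Find a nonce so that SHA-256(block_data + nonce) starts with `difficulty` zeros.

    A toy illustration of proof-of-work; real Bitcoin mining works on full
    block headers, uses double SHA-256, and has a far harder moving target.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = toy_mine("Alice pays Bob 1 BTC")
print(f"Found nonce {nonce} producing hash {digest}")
```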

Reality has been very different. While Bitcoin has been in existence since 2009, the number of places that accept it as a means of payment has remained limited; if anything, some corporate adopters, such as Dell and Steam, have stopped taking it altogether.

There are various reasons for this lack of success as a payment method. One of the main problems is price instability: in just the last couple of months the price of Bitcoin has gone on a roller-coaster ride, jumping from $7,000 to $19,000 in one month, then down to $9,000, and then back up to about $12,000. This volatility makes BTC particularly unsuited for merchants, who risk wild variations in price over short periods that could wipe out their margins. During the latest crash, the price dropped 12% in three hours, a swing that no stable currency could tolerate.

Many people were willing to ignore these problems as long as the price went up, but this creates another problem that has been identified with Bitcoin: deflation. When prices go up, we get what I call the “Bitcoin pizza” problem. In 2010 a software developer called Laszlo Hanyecz paid for a pizza with 10,000 BTC; as of today, that pizza is worth over $120 million USD, and it has its own Twitter account. The problem with a deflationary currency, and particularly one that has gained so much value in recent years, is that nobody wants to spend it, so hoarding becomes an issue. This has become its own Internet meme, known as “hodling”. Nobody wants to spend their bitcoins, so they become useless as a payment method.

But the main problem in recent months with using Bitcoin as a currency is that transaction times and fees have become prohibitive: the average transaction confirmation time as of today is 263 minutes, and the average transaction fee is an astounding $25 USD, having peaked at over $50 during December 2017. There are various reasons for this, including the size of each mined block, the reward given to miners, and the cap on the number of transactions per second intended to preserve decentralisation. The issue is therefore systemic, and one could argue that the more successful Bitcoin gets, the more useless it becomes as a currency.
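
The arithmetic behind the bottleneck is simple. The sketch below uses rough, assumed figures (a 1 MB block size, a ten-minute block interval and an average transaction size of around 250 bytes, all approximations rather than gospel) to show why the network tops out at a handful of transactions per second:

```python
# Rough, assumption-laden estimate of Bitcoin's theoretical throughput.
block_size_bytes = 1_000_000     # ~1 MB block size limit (approximate, pre-SegWit)
avg_tx_size_bytes = 250          # assumed average transaction size
block_interval_seconds = 600     # one block roughly every ten minutes

tx_per_block = block_size_bytes / avg_tx_size_bytes
tx_per_second = tx_per_block / block_interval_seconds

print(f"~{tx_per_block:.0f} transactions per block")    # ~4000
print(f"~{tx_per_second:.1f} transactions per second")  # ~6.7
```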

Several solutions have been proposed, including changing the code itself (a hard fork) and changing elements of the ecosystem to bypass Bitcoin’s scaling limitations (a soft fork). A few hard forks have taken place, such as Bitcoin Cash, but this is a controversial solution that still splits the community. The preferred solutions right now are Segregated Witness (SegWit) and the Lightning Network (LN), but these have been criticised for various reasons. In particular, LN seems to be a highly problematic design that requires funds to be committed up front and effectively mandates lending, which could complicate the environment even further.

Store of value

As a response to Bitcoin’s difficulties as a currency, proponents have started advocating it as something entirely different: a store of value, digital gold rather than digital dollars. They say that Bitcoin’s viability rests not on its use as a payment method, but on its superior technology, its use of the blockchain, and its scarcity (there is a limited supply of BTC). So instead of buying gold or making other investments to store your hard-earned fiat currency, you should convert it into Bitcoin.

There are of course various problems with this. Firstly, the argument assumes that prices will continue to go up, but just because the price has risen in the past does not mean it will continue to rise in the future. On the contrary, as BTC’s value went up, the various scalability issues came to the forefront, and its failure as a payment method dissuaded potential investors; with no new money flowing in, the price dropped. Those who bought BTC at $19,000 because they thought that the price could only go up were left holding the bag.

Moreover, the lack of transparency in the ecosystem still bothers me, just as it did from the start. Satoshi Nakamoto’s identity remains shrouded in mystery, and there are still large amounts of coins held by unidentified people. Over the years there have been many heists, robberies and hacking attacks against Bitcoin users, leaving large numbers of coins in the hands of real criminals. Similarly, since its inception Bitcoin has been used by criminals on the dark web as a method of payment for drugs and other unsavoury practices. This means that large amounts of BTC are held by shady characters and criminals, as well as by anonymous people who answer to no one. Call me crazy, but investing in a currency controlled by so many obscure interests does not fill me with confidence in it as a sound store of value.

Unregulated intermediaries

While Bitcoin and other cryptocurrencies are entirely decentralised on paper, their success depends in large part on the existence of exchanges and other intermediaries. As mining has become prohibitive for everyone but dedicated conglomerates with large amounts of computing power, the only way to obtain Bitcoin is to receive it as payment, or to purchase it with fiat currency or with other cryptocurrencies. As BTC is no longer viable as a currency, this usually means that to get your hands on some Bitcoin you need an exchange that can perform the transaction. There are other intermediaries that perform other functions, such as wallets to hold BTC, derivatives, and even lending. A lot of these activities are heavily regulated in “regular” financial markets for a reason: intermediaries of this nature handle funds and investments in ways that require a large amount of trust and transparency.

Exchanges are the chink in the armour of Bitcoin’s decentralisation, and throughout its short history there have been a large number of amateurish and even fraudulent intermediaries (see Mt.Gox). The reason for this is that for a while exchanges operated completely free of regulation, which is something that doesn’t seem to bother some Bitcoin enthusiasts. As more regulators started paying attention to exchanges, an interesting split seems to have occurred in the community. On the one hand we have legitimate, regulated intermediaries that appear to operate by the letter of the law, and on the other hand we have a number of dodgy and obscure exchanges that appear to be running outright scams and Ponzi schemes in areas where regulators have been reluctant to intervene. The latest horror story is Bitconnect, an exchange and lending operation that was running both a cryptocurrency (BCC) and a blatant Ponzi scheme, and that appears to have left investors out in the cold with large losses. There had been warnings from many in the community that Bitconnect was not operating properly, but this did not stop investors from pouring money into the exchange until it shut down earlier this month.

This is a big problem for the cryptocurrency environment right now. Regulation is anathema to many of the libertarian and anarchic enthusiasts who see Bitcoin as the perfect response to what they regard as a corrupt union of governments and traditional financial institutions. Regulation is therefore feared, yet it is precisely that regulation that is supposed to stop scammers and fraudsters from taking advantage of ill-informed investors. Moreover, any photogenic millennial with a couple of months of trading experience can go on YouTube to provide investment advice, and sometimes to shill outright for fraudulent businesses such as Bitconnect. In the era of the demise of expertise, memes and “influencers” rule.

The latest questionable practice is Tether (USDT), an attempt to peg cryptocurrencies to some fiat value while bypassing strict national regulation. Tether is a cryptocurrency token that is allegedly backed by US dollars; in other words, for each Tether in existence, the issuers claim that there is a dollar supporting its value, so it is nominally pegged to the USD. However, the company has obscure origins, and for a while its operations were kept secret, which did not stop many exchanges from accepting it. Thanks to the Paradise Papers we have learned that Tether was created in the British Virgin Islands (which should already raise some eyebrows) by the operators of Bitfinex, one of the largest BTC exchanges. Further research suggests that the dollars backing Tether are supposedly held in a tiny Polish bank, casting further doubt on the operation. And to put the icing on the cake, during the current crash, as the price dropped to $9,000 USD, Tether’s operators started printing tokens at an alarming rate, up to $650 million in the last week alone, bringing the total amount of Tethers to $2 billion USD. Needless to say, there are worries that Tethers are being used to artificially maintain the price of Bitcoin during a downturn, as this value created out of nothing is being used to buy cryptocurrencies.

If Tether and Bitfinex are eventually the subject of regulatory oversight, this could lead to a huge crash for Bitcoin, so stay tuned (and follow Bitfinex’ed on Twitter for updates).

What is clear is that it is suspicious that any hint of regulation tends to send the price of Bitcoin tumbling, and that some exchanges could be engaging in price manipulation in an unregulated market, such as painting the tape and wash trading. Moreover, researchers have found that a single actor may have been able to manipulate the price of Bitcoin from $150 to $1,000 in the Mt.Gox era.

Security and replicability: old concerns

As Bitcoin’s value went up, so did the incentive for hackers to try to steal it. While Bitcoin itself is protected by strong cryptography, users are vulnerable to attacks that try to steal their coins. Hackers have successfully targeted exchanges and users, managing to steal hundreds of thousands of BTC; a list of coins stolen over the years shows over 1.8 million BTC taken in major incidents, and this does not include everyday attacks. Strong encryption does not protect against fraudsters and scam artists. The security risks surrounding Bitcoin are hard to assess, but several have been rated as very high risk, such as general security, subversive miner strategies, loss of keys and man-in-the-middle attacks.

The other problem is that when your coins are gone, they are gone for good (I should know, I still have 0.01 BTC on a broken hard drive). If any coins are stolen, the community will either blame the victim or say “sorry for your loss”. The victim blaming is an interesting phenomenon: when someone complains that they were hacked or their coins stolen, people in the community will often criticise the victim’s security measures, and it seems that to operate in the Bitcoin environment one needs security skills that rival those of a bank. I have always seen this as a huge obstacle to adoption.

Growing complexity

Related to the last point, the complicated nature of Bitcoin and the cryptocurrency environment presents problems for mainstream interest. Over the years, I have tried to explain cryptocurrencies to people who know that I am interested in Bitcoin, and I often lose their interest; “too complicated” is a common reaction. The problem is that even Bitcoin proponents admit that there is a steep learning curve to understand the technology properly, and understand it they must, otherwise they will get hacked. So we have an interesting paradox: Bitcoin enthusiasts ridicule the mainstream for not understanding BTC, but at the same time yearn for the wider adoption that will bring the price up, justifying their early adopter status. Needless to say, the two attitudes are incompatible.

Things are made more complicated by the proliferation of cryptocurrencies and Initial Coin Offerings (ICOs), producing a soup of acronyms that can confuse even those interested in the space. And if you don’t know your ETH from your LTC, or your BTC from your BCH, or you can’t identify what SegWit is (which you should totally be using), then you are laughed out of the community.

The result is a decreasing pool of techies and geeks who can push the price up. And that is all that matters to some.

Environmental cost

The computational power dedicated to mining has continued to increase over time. In Bitcoin, computing power is called the hash rate, and the unit of measure is the hash per second, meaning one calculation per second. Ten tera hashes per second (Thash/s) means that the network is performing 10 trillion calculations per second, and the hash rate at the time of writing stands at over 19 million Thash/s. Whichever way you measure it, that is an astounding amount of computing power used to produce value, and it could have a large impact on the environment. Researchers have found that the entire Bitcoin network consumes more energy than each of 159 individual countries. Such a staggering energy expenditure should prompt serious questions about Bitcoin’s carbon footprint and other related environmental problems.
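
To put those units into perspective, here is a rough conversion that starts from the hash rate quoted above and assumes an efficiency of roughly 100 joules per Thash (about that of an efficient 2017-era mining ASIC; the real network average is worse, so treat the result as a conservative illustration rather than a measurement):

```python
# Back-of-the-envelope estimate of mining power draw from the hash rate.
hash_rate_thash_s = 19_000_000                  # ~19 million Thash/s, as quoted above
hashes_per_second = hash_rate_thash_s * 1e12    # 1 Thash = 10^12 hashes

joules_per_thash = 100   # assumed efficiency (~0.1 J/GH); purely illustrative

power_watts = hash_rate_thash_s * joules_per_thash   # joules per second = watts
annual_energy_twh = power_watts * 8760 / 1e12        # 8760 hours/year, Wh -> TWh

print(f"{hashes_per_second:.2e} hashes per second")  # ~1.9e+19
print(f"~{power_watts / 1e9:.1f} GW of continuous power draw")
print(f"~{annual_energy_twh:.0f} TWh per year under the assumed efficiency")
```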

Breaking the Blockchain?

While Bitcoin may have a lot of issues, a lot of people have decided to back its underlying database technology, the blockchain. Even if Bitcoin tanks, the argument goes, the blockchain will remain.

The blockchain is a decentralised, distributed, cryptographic public ledger. This sounds very impressive, and proposals have been made to implement a blockchain in everything from music licensing to bananas. While I have been considerably more enthusiastic about the blockchain’s potential than about cryptocurrencies, this initial enthusiasm has waned in recent years. The main problem is that, for all its promise, blockchains are difficult to implement, and could prove to be less efficient and more cumbersome than existing solutions.

While blockchain hype has been increasing, some scepticism has started seeping in. Many projects that started out as blockchain projects ended up implementing different technologies, because institutions thinking of developing a blockchain face time constraints, barriers to adoption, and sheer complexity. More interestingly, of 26,000 blockchain projects listed on the code repository GitHub in 2016, only 8% survive to this day.

Perhaps the most scathing and interesting attack against blockchain hype has come from Kai Stinchcombe, who made a lot of waves by pointing out that in ten years the practical uses for the blockchain have been minimal, or even non-existent. While I disagree with the categorical statement, he does a good job of dissecting various case studies in favour of the blockchain, and finds them wanting.

Another fantastic critic of blockchain hype and Bitcoin in general is David Gerard, with his awesome book “Attack of the 50 Foot Blockchain“.

Concluding

I often dread writing about Bitcoin because the topic tends to attract people who are completely in favour of the cryptocurrency, and some of them do not take kindly to criticism. Anyone who is a BTC sceptic is quickly labelled a paid shill for Wall Street, a FUD merchant, an uninformed person who doesn’t understand the amazing technology, a Statist, a nocoiner loser, a bitter person who sold their BTC too early, or a combination of the above.

These are my honest opinions as someone who is mildly adept at the technology and who has been following Bitcoin from early on; I have no motive other than my endless pursuit of writing about things that interest me. It is possible that I will be proven wrong and Bitcoin will continue its unstoppable march towards world domination. I doubt it, but I’m not bothered either way.

However, it is precisely this type of religious reaction from the community that often makes many of us highly sceptical. In a fantastic Twitter rant against Bitcoin, the always excellent Sarah Jeong said:

“I am the target demographic for blockchain based solutions. I am the paranoid 1% who purposefully inconveniences her life for decentralization and cryptographic solutions. I am the rare case and I fucking hate bitcoin”

I couldn’t put it better myself.


Source: Technollama

Digital colonialism describes the domination of Western companies in the provision of digital services in developing countries. These tend to be overwhelmingly US-based, and can be found in messaging, social media, search, music, storage, hosting, and domain names. While other names have been used to describe the phenomenon, the term digital colonialism dates mostly from the Net Mundial initiative, a series of meetings and events organised by the Brazilian government to shine a light on digital inequality in the global south.

Western digital dominance is easy to see at all levels. Google, Facebook, WhatsApp, Snapchat, Uber and Airbnb all provide services that are not only largely based in the US, but that also tend to follow a very specific mentality centred on Silicon Valley. The values of a small area in California are exported around the world.

This dominance has various explanations. The Internet itself started as a US military research network, so US-based services and developers had a head start. For a long period of time, Internet governance relied on the US-centric ICANN (which has since undergone internationalisation efforts). Early venture capitalists invested mostly in US companies, and this advantage carried forward. Network theory teaches that early advantages are often difficult to overcome, and the network favours winner-takes-all outcomes from an architectural perspective. Furthermore, the US was able to convert this early advantage in expertise and funding into large corporations. Finally, potential competitors have been more inward-looking, and not intent on global dominance. China has developed hugely successful companies like JD, Tencent, Baidu, and Alibaba, which rival their US counterparts in size, but these are mostly directed towards the internal market. The same happens with other successful companies such as Flipkart (India), B2W (Latin America), and Odigeo (Europe).

The result is a US-centric Internet from the perspective of both infrastructure and content. At the infrastructure level, the largest hosting, domain name, storage and content delivery networks are US companies. At the content level, Google and Facebook stand alone in their dominance of what people see and read around the world.

The problem is that content dominance becomes a self-fulfilling prophecy, as these companies use their already strong position to maintain their hold on the market, in what is often called the “rich-get-richer” effect. Newer content providers in developing countries are competing with companies that have considerable resources, infrastructure, and consumer recognition.

And now the tech giants are using this dominance in other ways that further their interests. Facebook has a program called Internet.org which, in partnership with other tech giants, is supposed to give free or cheap Internet access to people in developing countries. This is done through Free Basics, which gives free access to a few selected mobile apps. While the idea of giving free Internet access is good, the reality is that the program cements Facebook’s content dominance, so that people come to equate Facebook with the Internet. It also hinders local apps from competing with the selected Free Basics apps.

Google has a program called Project Loon, which attempts to provide Internet connectivity to people in remote rural areas through balloons. While laudable, it is possible that this too will be used to maintain Google’s market dominance, as well as to collect useful data from users.

What can we do to stop this digital colonialism? Last October I attended a session at Mozfest organised by the always amazing Renata Avila. People from around the world talked about their experiences with digital colonialism, while discussing potential solutions.

Some participants suggested that governments in developing countries have to get more involved in trying to encourage local solutions to stop the reliance on Silicon Valley companies. However, a good number of participants were very suspicious of governmental solutions, and seemed to favour grass-roots, bottom-up, decentralised approaches. And then there was even one tech-bro from Silicon Valley who suggested that “we all have money to fly all the way to London to come to this meeting, we should fund something ourselves”. Needless to say, I had to point out that I had paid a £20 train ticket from Brighton, but I digress.

I think that both the Statist and the Libertarian solutions miss the point. We should indeed be suspicious of governments, which may want to impose their own political agendas. But we should also be suspicious of the mentality that thinks we can all get along in a cyber-utopia ruled by benign venture capitalists and funded by bitcoins. Breaking Silicon Valley’s dominance cannot be done just by creating a new app; resources are important. A possible way forward could be to rely on government funds to kick-start projects, but we do need decentralised solutions that are scalable.

In the end we need to recognise that tech giants are dominant because users like their services, and changing customer perception is going to be difficult. A lot of activists seem to parrot Brecht’s “The Solution”: the people are wrong, so we must find a new people.

Any real solution needs to win back the people, not dissolve them.


Source: Technollama

I have just listened to the latest episode of the excellent podcast “This American Life”, which dealt with the story of a monkey that took a selfie in the jungles of Indonesia, and David Slater, the photographer who made the portrait famous around the world.

The podcast recounts how Slater travelled to Indonesia in 2011 and followed a troop of monkeys trying to get a few unique pictures. This part of the story is quite important from a copyright perspective, and it is interesting to hear more details from Slater’s own retelling of the events. Slater claims that he was specifically looking for a very close shot of a monkey’s face using a wide-angle lens, but the monkeys were shy and would not allow him to get too close. While he managed to take a few pictures, none was the shot he was looking for. He claims that he placed the camera on a tripod because the monkeys were curious about the equipment, but the first batch of pictures was not good enough. He claims that he then changed the camera settings again, and one monkey in particular was drawn to the reflection on the lens. The monkey then went on to take a few pictures.

Slater claims that one in particular was astounding, a once-in-a-lifetime transcendental shot in which the monkey’s expression is one of pure joy and self-awareness. He imagined it on the cover of National Geographic, so he sent that and a few other pictures to his agent, who circulated them to a few news sources, and the image was first picked up and published by the Daily Mail as a feature story.

The rest is history.

The podcast goes through two events that are interesting from a legal perspective: the publication of the photograph on Wikipedia as a work in the public domain (because monkeys cannot own copyright), and the lawsuit brought by PETA. I have dealt with those two events in detail in blog posts and one article, but it is worth mentioning that the case was eventually settled out of court without a decision being made about the copyright in the photograph itself. While the terms of the settlement are not known, both parties have revealed that Slater agreed to donate a percentage of his royalties to the monkey refuge where Naruto the monkey lives.

I am not so interested in the PETA case any more, but the podcast makes a very interesting comment, almost in a throwaway fashion. The reporter mentions that Slater is thinking of suing Wikipedia for copyright infringement.

Here we go again.

My guess is that Slater is planning to sue Wikipedia in the UK. Many commentators on the PETA case rightly dealt with US copyright law, as the case took place in California, and the consensus across the pond seems to be overwhelmingly against the picture having copyright. The jurisdiction aspect has always fascinated me; PETA clearly made a decision to sue in a US court when it could have easily sued in the UK. But if Slater sues in the UK, then my contention is that he has a very strong case for claiming that he owns the picture and that copyright subsists in the image. I have made some of these arguments before, but here I will rely specifically on two photograph cases to make my argument.

The first case is Painer v Standard Verlags GmbH (C‑145/10), which pitted Austrian photographer Eva-Maria Painer against several German-language newspapers. Ms Painer is a professional photographer who took a portrait of the young Natascha Kampusch, who later became famous for being kidnapped and held for eight years in a basement before escaping her captor. At the time of her kidnapping, the only available picture of Ms Kampusch was the portrait taken by Ms Painer. Several newspapers used a computer-modified version of the portrait to illustrate their stories of Ms Kampusch’s escape, and in 2007 Ms Painer sued for copyright infringement over the unauthorised use. The defendants alleged, amongst other things, that the portrait did not have copyright because it was not original enough, being just a realistic picture with little room for originality. The question was referred to the CJEU, which applied the prevailing law and case law declaring that photographs are original if they are the author’s own intellectual creation reflecting his or her personality. But the Court of Justice went further, and elucidated what makes a photograph original and worthy of protection:

“90. As regards a portrait photograph, the photographer can make free and creative choices in several ways and at various points in its production.
91. In the preparation phase, the photographer can choose the background, the subject’s pose and the lighting. When taking a portrait photograph, he can choose the framing, the angle of view and the atmosphere created. Finally, when selecting the snapshot, the photographer may choose from a variety of developing techniques the one he wishes to adopt or, where appropriate, use computer software.
92. By making those various choices, the author of a portrait photograph can stamp the work created with his ‘personal touch’.
93. Consequently, as regards a portrait photograph, the freedom available to the author to exercise his creative abilities will not necessarily be minor or even non-existent.”

This is extremely relevant for the current case. While Painer deals with portrait pictures, the court is very clear in listing the various actions that warrant originality, including choosing the angle and the lens, and even developing the photograph. It is also important to note that nowhere in this definition, and in fact nowhere in any European case law or legislation (as far as I know), does the law require that the button be pressed by the photographer; the acts preceding and following the taking of the photograph seem to be more important in establishing whether the photograph is the author’s own intellectual creation.

A similarly strong indication that Slater can claim ownership of the picture comes from the English case Temple Island Collections Ltd v New English Teas [2012] EWPCC 1. The case involves an iconic black and white picture of the Houses of Parliament with a red bus crossing Westminster Bridge; the photograph is owned by a company that produces and sells London souvenirs, and the picture became famous and has been licensed to other companies. The defendants were a tea company that wanted to use the picture on its tins, and when negotiations for a licence to use the image broke down, they went ahead and produced a different version of the Temple Island picture with a different angle and setting, but keeping the monochrome background and the red bus.

While the case rested mostly on whether a substantial part of the Temple Island image had been copied, the defendants argued at some point that the copied picture did not have copyright because it was not an original work. Here Birss QC relies heavily on Painer and other CJEU cases, and makes the following comment about the subsistence of copyright in the image:

“A photograph of an object found in nature or for that matter a building, which although not natural is something found by the creator and not created by him, can have the character of an artistic work in terms of copyright law if the task of taking the photograph leaves ample room for an individual arrangement. What is decisive are the arrangements (motif, visual angle, illumination, etc.) selected by the photographer himself or herself.”

So far, so similar to Painer: as long as the author has made decisions about the arrangement of the photograph, it should have copyright. But most important is the discussion about how photography usually presents a problem for copyright law, as “the mere taking of a photograph is a mechanical process involving no skill at all and the labour of merely pressing a button.” Something more than the mere act of pressing a button is needed to convey originality. Birss QC lists the following elements as acts that can convey originality in a photograph:

“i) Residing in specialities of angle of shot, light and shade, exposure and effects achieved with filters, developing techniques and so on;
ii) Residing in the creation of the scene to be photographed;
iii) Deriving from being in the right place at the right time.”

Notice that all three of these elements sit above the mere physical act of pressing a button, and note in particular the third situation that can convey originality: being in the right place at the right time. If we believe Slater’s own telling of the story (and at the moment we have no other witnesses, apart from the monkeys), he set up the tripod, selected an angle, selected the lens aperture, checked the lighting, and was in the right place at the right time. To my mind, Slater did more than enough to be awarded copyright protection, without even considering his actions after the picture was taken, including the development of the photograph.

Concluding, there is an extremely strong argument to be made regarding the originality of the monkey selfie in the UK based on these and other cases. If Slater were to sue Wikipedia in the UK, I can see a good chance that he would be granted copyright over the picture.

Then again, everyone knows that monkey copyright is a bit passé. It’s all about feline copyright nowadays.

By the way, I am perfectly aware of the possible irony that I am arguing that the photograph is protected by copyright, and I’m reproducing it here. I think that I’m protected by fair dealing…


Source: Technollama

Where do things happen online? This is the eternal question of Internet regulation. While we like to think of the Internet as a global medium, increasingly we are faced with a regulatory clampdown and real-world solutions to online incidents. The latest decision dealing with online jurisdiction comes in the shape of Bolagsupplysningen OÜ and Ingrid Ilsjan v Svensk Handel AB (Case C‑194/16), an online defamation case.

The case involves Svensk Handel, the Swedish trade federation of the commercial sector, and the Estonian company Bolagsupplysningen, which offers corporate search services and conducts its business mostly in Sweden. One of Svensk Handel’s functions is to provide consumer information with regards to dubious commercial practices, and it lists several websites that engage in potentially damaging and/or fraudulent practices. Svensk Handel has an entry on Bolagsupplysningen (still live at the time of writing), which warns users that the Estonian company sends out incorrect address forms to its customers, which, when sent back, contain a clause signing the customer up for a business subscription. The page has comments open (over 1,600 at the time of writing), most of them from consumers criticising the Estonian company and describing their own experiences.

Bolagsupplysningen sued Svensk Handel in an Estonian court for defamation, alleging that both the information on the page and the comments were defamatory, and claiming that the comments were filled with insults and even death threats against its employees. The Estonian court at first instance rejected the claim because the page was published in Sweden and written in Swedish, so no damage could be established in Estonia; furthermore, the mere fact that the content was accessible in Estonia via the Internet did not automatically justify bringing the case before an Estonian court. The case was appealed, and the Tallinn Court of Appeal sided with the first ruling. The decision was then appealed to the Estonian Supreme Court, which decided to stay the proceedings and referred three questions to the Court of Justice of the European Union.

  1. Can a legal person sue for the entire harm caused by infringing comments online in the country where the information was accessible?
  2. Can a legal person sue for the entire harm caused by infringing comments online in the country where that person has its centre of interests?
  3. If the answer to question 2 is affirmative, in which jurisdiction can the injured person seek remedies?

The CJEU answers the first question quickly in the negative by ruling that a person “cannot bring an action for rectification of that information and removal of those comments before the courts of each Member State in which the information published on the internet is or was accessible.” This is the most logical conclusion, as a positive answer would have opened the floodgates to online defamation suits in all Member States with no other connection than the fact that some information was published online. That way madness lies.

The Court merged the second and third questions, and delved into the underlying legal issue in more detail. It posed the legal question thus:

“…a legal person claiming that its personality rights have been infringed by the publication of incorrect information concerning it on the internet and by a failure to remove comments relating to that person can bring an action for rectification of that information, removal of those comments and compensation in respect of all the damage sustained before the courts of the Member State in which its centre of interests is located and, if that is the case, what are the criteria and the circumstances to be taken into account to determine that centre of interests.”

The previous CJEU authority on this subject was eDate Advertising and Others (C‑509/09 and C‑161/10), in which it was decided that the main consideration for online jurisdiction over a tort, delict or quasi-delict is to bring an action where the harmful event has taken place or may take place. The Court was clear that this should be interpreted broadly, commenting that it can be deemed to be the place where the person resides, as this is where the harm is likely to be greatest, taking into account that the damage will be “felt most keenly at the centre of interests of the relevant person, given the reputation enjoyed by him in that place.” (at para 33). The Court explains this reasoning further:

“Thus, when the relevant legal person carries out the main part of its activities in a Member State other than the one in which its registered office is located, as is the case in the main proceedings, it is necessary to assume that the commercial reputation of that legal person, which is liable to be affected by the publication at issue, is greater in that Member State than in any other and that, consequently, any injury to that reputation would be felt most keenly there. To that extent, the courts of that Member State are best placed to assess the existence and the potential scope of that alleged injury, particularly given that, in the present instance, the cause of the injury is the publication of information and comments that are allegedly incorrect or defamatory on a professional site managed in the Member State in which the relevant legal person carries out the main part of its activities and that are, bearing in mind the language in which they are written, intended, for the most part, to be understood by people living in that Member State.”

The Court then answers the referred questions:

“The answer to the second and third questions therefore is that Article 7(2) of Regulation No 1215/2012 must be interpreted as meaning that a legal person claiming that its personality rights have been infringed by the publication of incorrect information concerning it on the internet and by a failure to remove comments relating to that person can bring an action for rectification of that information, removal of those comments and compensation in respect of all the damage sustained before the courts of the Member State in which its centre of interests is located.
When the relevant legal person carries out the main part of its activities in a different Member State from the one in which its registered office is located, that person may sue the alleged perpetrator of the injury in that other Member State by virtue of it being where the damage occurred.”

For the most part this seems like a rational decision based on the law, but not such a good ruling regarding the specifics of this case. It feels strange to give jurisdiction to a court in Estonia for a potential defamation occurring on a Swedish website, published in Swedish and dealing mostly with Swedish consumer issues, even if the company is based in Estonia. While it is understandable that the harm may occur where the person resides and conducts business, the harmful act itself took place in Sweden. The Court leaves that option open as well, the result being that, at least in principle, those affected by defamation (or other civil harm) could sue both in the country where the harmful act took place and in the country where they have their centre of interests.

I for one do not see any changes to current practices, but I am keen to hear what others think.


Source: Technollama

I’ve just finished reading “Kill All Normies” by Angela Nagle, a thoroughly enjoyable experience for anyone who is interested in Internet culture wars and how politics is shaping and being shaped by various online tribes. The title comes from the name given to normal people in some online chatrooms, particularly 4chan and 8chan.

I have to start by saying that I really loved this book, although it is altogether too short and left me wishing it were longer. I have been fascinated for many years by the rise of the alt-right, a phenomenon that I have followed in gaming communities and Reddit forums, and it was very nice to see the disparate narrative of the online alt-right phenomenon brought together so well. Some of the best parts of the book come when it shines a light on various groups and events, such as elevatorgate, gamergate, and the so-called manosphere. It does so in an even-handed and authoritative manner, and it is evident that Nagle has done her homework and is very familiar with the ins and outs of the various movements and cultures.

The subject matter can be difficult to read, particularly some of the descriptions of the abuse endured by prominent feminists such as Anita Sarkeesian, Brianna Wu and Jessica Valenti. The vile nature of many of the troll armies and men’s rights vloggers is made clear without being overly preachy or excessive, often using their own words to explain their views. These sections are a must-read for anyone who wants to gain a better understanding of how the alt-right movement originated, and how exactly these people managed to spread so far and gain political influence.

I found the depiction of the right-wing online spaces enthralling, even though I was already quite familiar with a lot of these movements, perhaps with the exception of the Red Pill and incel communities. I was aware of them, but one look at the Red Pill subreddit left me feeling extremely sad, and I never went back. Nagle has a great talent for conveying the nature of these forums and vlogs without the reader having to visit them, and it is also quite clear that she does not take their opinions seriously. Her style in this chapter is a stark contrast to the ridiculously complimentary treatment of so-called men’s rights in a rightly derided New York Times article, which normalised and gave a platform to MGTOW groups that are filled with known abusers. It is difficult to get the balance right on these issues, and Nagle hits the nail on the head.

While most people will read the book to find out about the alt-right, my favourite part comes when Nagle portrays online left communities such as Tumblr, which have given rise to a type of identity politics that has managed to split mainstream leftist politics as well as online spaces. This is probably one of the most forceful political messages in the book: as a self-described leftist, Nagle is open in criticising some of the excesses of leftist online politics, particularly the emphasis on identity. This is a difficult political fight on the left right now, the split between the economic left and the identity left, and it feels like the identitarians have been winning by virtue of being loudest online. This is a phenomenon that was well described in Mark Fisher’s essay “Exiting the Vampire Castle”, and Nagle sides with Fisher’s view that these online communities have become problematic in their own right. At some point an obsession with identity became one of the guiding lights of the left, but also a toxic mixture of victimhood and vicious attacks on anyone not toeing the line.

I found this chapter extremely interesting because I have to admit that during the zenith of the Tumblr left, and the cultural revolution that followed, I managed to elude most of the drama described in the book. I had a quick look at the silliness that was rising in some sectors and decided to cull my social timelines to filter out all such idiocy. I refuse to turn my identity into a source of victimhood, and found the constant calls to check my privilege as a cis-gendered person of colour who currently identifies as male extremely tiresome. The problem, of course, is that the right wing and the alt-right have been quite adept at using this obsession with identity to their advantage, exploiting the left’s seemingly inexhaustible capacity to take the bait on identity politics by dangling morsels that are always swallowed whole. To me nothing speaks more of this obsession with identity than the transgender bathroom ban debate in the United States in the run-up to last year’s election. A large part of my social timeline became obsessed with toilets, an amount of interest that is not proportional to the percentage of the population affected. I am not saying that transgender rights are not important, but to me it was evident that this obsession was detrimental because it showcased something that was cleverly exploited by the right: the left came across as degenerates obsessed with sex while some sectors of the economy were suffering. The right took the opportunity to talk directly to those who felt, rightly or wrongly, alienated by the debate. In some ways, the left has been colonised by US-centric identity obsessions that have little relevance to the rest of us, and it often forgets important developing-world struggles.

This is something that comes across quite well in the book. At some point the left forgot how to argue, and we have been losing the meme war online. Brexit, Trump and the AfD have shown that the right takes the excesses of the left and weaponises them in the shape of memes. Nagle explains that an online culture that only responds by blocking and hiding in safe spaces has completely forgotten how to hold an argument and respond adequately when the need arises. We have been getting our arses kicked online, and people like Richard Spencer, Milo, and Steve Bannon have understood the power of populist narratives. Bannon is actually on record saying:

“The Democrats,” he said, “the longer they talk about identity politics, I got ’em. I want them to talk about racism every day. If the left is focused on race and identity, and we go with economic nationalism, we can crush the Democrats.”

I cannot really consider myself a leftist in the traditional sense any more, and this is in part because of the movements that Nagle and Fisher talk about. But the online nazis need an opposition, and if anything, I hope that Nagle’s book will help to galvanise those of us who think that we need to take the alt-right head-on and beat them with their own weapons. But to do that, we need to look at the big picture and stop engaging in tiresome witch-hunts that do nothing but antagonise and alienate potential allies. The enemies are the nazis, not the economic and anarchic left.

As a gamer, I just have a small complaint about the term kek. Nagle says that the term “started on 4chan and translated to ‘lol’ in comment boards on the multiplayer videogame World of Warcraft”. This is somewhat accurate, but kek is such an important part of video game culture that, as a WoW player, I thought it deserved a better explanation. In WoW there are two factions, Alliance and Horde, and each speaks its own language, Common and Orcish respectively. This means that you cannot communicate with the other faction, and when someone types a chat message in a space where the other faction can see it, the game translates it to look like Common or Orcish. So if I’m a Horde player and say “Victory or death!”, an Alliance player will read “Lok-Tar Ogar!”. Very early on, players found out that when a Horde player said “lol”, it would be read by the opposition as “kek”. The meme took off, and it is now more common to say kek than lol in the game.
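
For the curious, the substitution can be sketched in a few lines. This is purely a toy; the real game mangles words internally through its own scheme, and the dictionary below is invented for illustration only, apart from the famous “lol” to “kek” substitution:

```python
# Toy sketch of WoW's cross-faction chat mangling; the mapping is invented
# for illustration, except for the "lol" -> "kek" substitution described above.
FAKE_TRANSLATIONS = {
    "lol": "kek",
    "victory or death!": "lok-tar ogar!",
}

def as_seen_by_the_other_faction(message: str) -> str:
    """Return the garbled text the opposing faction would see (toy version)."""
    # Unknown messages come out as made-up gibberish in this sketch.
    return FAKE_TRANSLATIONS.get(message.lower(), "bur gul zug")

print(as_seen_by_the_other_faction("lol"))                # -> kek
print(as_seen_by_the_other_faction("Victory or death!"))  # -> lok-tar ogar!
```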

Just another weird meme, like Thunderfury, Blessed Blade of the Windseeker (don’t ask, it’s complicated).


Source: Technollama

[Image: the monkey selfie. Public domain or animal rights?]

So, the long-running legal saga starring a photographer, a monkey and an animal rights organisation has finally come to an end, with both parties (not the monkey) reaching a settlement. While it is not common to learn the particulars of such an agreement, lawyers for PETA have said that the deal includes a commitment from photographer David Slater to pay 25% of all future royalty revenue to the monkey sanctuary. For some background on the events that took place before, you can read my published peer-reviewed article here, and my three earlier blog posts (one, two, and three).

This is not the resolution that we legal geeks wanted. By definition, a settlement is when the parties come to a mutual agreement, and a court does not get to decide on the point of law. So while we have a lower court decision declaring that PETA did not have standing to bring the case because monkeys cannot sue for copyright infringement, we still do not have any declaration on whether an animal can have copyright. This may seem counter-intuitive given that the case was appealed, but from the very start the case seems to have been fought mostly on technicalities and procedural issues (such as whether PETA had identified the right monkey). While I do not think that animals are capable of owning copyright, I would have liked a court to take a look at the more interesting questions.

There are only three legal options with regard to the picture: the monkey has rights (which no court has ever declared); David Slater owns the copyright; or the picture is in the public domain, in which case everyone can use and re-use it. With the legal situation as it stands after the settlement, it is possible that many people won’t pay any royalties to Slater, as they continue to argue that the picture is in the public domain, so the entire idea of 25% being paid to the monkey refuge seems a bit strange.

My position remains that Slater owns the copyright in the picture, as he did enough in setting up the camera to allow the monkey to take a picture, choosing the right aperture and angle, and then selecting a number of pictures for publication and discarding others. This is consistent with the fact that copyright is not awarded to the person who pushes the button; otherwise there would be no copyright in pictures taken with a timer. In my opinion, Slater has done enough to meet UK and European standards of originality. Because Slater is British, European standards apply; PETA brought the case in the USA for two reasons: more sympathetic courts, and the existence of the “next friend” figure, which can be used in situations in which a person cannot bring a case on their own, for example a child. If the case had been brought in Europe, it is my contention that Slater would have won outright.

I am aware that there are plenty of people who disagree with this view, and to this day many claim that the image is in the public domain. The settlement does nothing to change the controversy.

The legal issues in this case are fascinating because technology increasingly allows creations by non-humans: animals taking pictures, robots producing art, and so on. In many jurisdictions such works do not have protection, but we may have to revisit copyright legislation to bring about legal certainty.

I have a feeling that we have not heard the last from Naruto the monkey and simian copyright.


Source: Technollama

One of the most over-used (yet true) legal comparisons in Internet regulation studies is the contrast between the European and US approaches to freedom of speech in cyberspace. The United States favours an almost unlimited view of freedom of speech, while Europe has put in place large caveats and balances with other rights, particularly privacy. This clash is often seen in European legislation and case law that seem to erode freedom of speech, such as bans on Nazi memorabilia, curbs on hate speech, the right to be forgotten, and requirements for intermediaries to remove hateful content online.

Now the US-based civil liberties community is undergoing a serious soul-searching exercise about the limits of freedom of speech, after a series of events prompted a re-examination of the sacrosanct First Amendment. Although the debate has exploded in the last couple of weeks, it has actually been going on for a while now. I would argue that the current iteration of the online free speech debate gained force after gamergate, when several prominent feminists started receiving serious online abuse and threats. It soon became clear that a lot of the abuse got a free pass from online platforms, with gamergate supporters assuming the mantle of freedom of speech (see here and here). The prevalent meme espoused by some sides in gamergate was that they were the first line of defence against the censorious so-called Social Justice Warriors, feminists, and the “PC Brigade”. Lindy West, in an article in the New York Times, explains the situation:

“[…] The anti-free-speech charge, applied broadly to cultural criticism and especially to feminist discourse, has proliferated. It is nurtured largely by men on the internet who used to nurse their grievances alone, in disparate, insular communities around the web — men’s rights forums, video game blogs. Gradually, these communities have drifted together into one great aggrieved, misogynist gyre and bonded over a common interest: pretending to care about freedom of speech so they can feel self-righteous while harassing marginalized people for having opinions.”

Various writers have pushed back against this caricature of the issues, and have tried to defend a more nuanced view of free speech, particularly when it comes to online abuse. Bishakha Datta frames freedom of expression online as a conflict of power inequality, and concludes that “no one should have the right to abuse another under the guise of freedom of expression.” Soraya Chemaly writes that “when institutions tolerate sustained online bullying, abuse, and harassment, they become complicit in it.” Similarly, Sarah Jeong in her book The Internet of Garbage argues that online abuse has various elements, and that it is not only a debate about freedom of speech.

The latest iteration of the free speech conflict started with the publication of an internal document at Google by engineer James Damore. The document reads like an anti-diversity manifesto, claiming that women have differences from men that make them less likely to be effective in the technology workplace. Damore was fired by Google, prompting an immediate backlash from free speech proponents. Then came Charlottesville, when the US woke up to the fact that there is a sizeable contingent of neo-nazis and white supremacists. I have a theory that Charlottesville shocked many sectors of the left because the people marching looked normal, just your average white dude from down the street. It became clear that the neo-nazis had been congregating and organising online with no opposition whatsoever, and now they were marching and committing a terrorist attack that took the life of counter-protester Heather Heyer. Something clicked, the penny dropped, the light bulb went on, and tech firms finally took action: the white supremacist website Daily Stormer was dropped by its service providers, Facebook and Reddit removed several hate group pages, Apple Pay withdrew payment facilities from several hate sites, and even Spotify banned white power tracks.

All of these actions have prompted a debate amongst the otherwise monolithic pro-freedom-of-expression civil liberties groups such as EFF and the ACLU. EFF has come out strongly in favour of freedom of speech online with a strongly worded condemnation of the tech firms’ removal of hateful sites. Their argument is familiar: “if you tolerate this, then you might be next”; they also rightly say that corporations should not be the arbiters of what can get published online. Three Californian ACLU affiliates have stirred the pot by claiming that white supremacist violence is not freedom of speech, concluding that “the First Amendment should never be used as a shield or sword to justify violence.” The national ACLU continues to stand for freedom of speech.

From an Internet regulation perspective, this has been a very interesting week. As someone who favours the European approach to freedom of speech, I see what is happening in the US right now as a sudden realisation that maybe the European standards are worth a second look. I often disagree with friends and colleagues from across the pond on this very topic. A lot of people I greatly respect and admire tend to be on the free speech maximalist end of the spectrum, while I am in favour of things like data protection, the right to be forgotten, hate speech removal, and even the criminalisation of some online practices. I do agree, however, that platforms and intermediaries should not have the power to unilaterally decide when to remove something, and this is where some sort of regulation comes into play.

It all comes down to a basic idea about what an open and democratic society should look like, and it is best expressed by Karl Popper in his book The Open Society and Its Enemies, in what is known as the paradox of tolerance. Popper explains:

“Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. […] We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant. We should claim that any movement preaching intolerance places itself outside the law, and we should consider incitement to intolerance and persecution as criminal, in the same way as we should consider incitement to murder, or to kidnapping, or to the revival of the slave trade, as criminal.”

This is well encapsulated in various types of legislation across Europe, particularly the banning of Nazi memorabilia and the criminalisation of online hate speech, but to a lesser extent it can also be seen in other measures designed to balance rights, such as free expression and privacy. In the US, free speech is usually not subject to the same balancing act that European courts perform, and this is one of the reasons why the Court of Justice of the European Union (CJEU) recognised the so-called right to be forgotten in the Google Spain case. The balancing act can be seen in more detail in a series of decisions by the European Court of Human Rights (ECtHR), which starts with Delfi v Estonia and culminates in MTE v Hungary. In these cases, the ECtHR had to balance the freedom of speech of news organisations and internet intermediaries against the right to privacy of users who were abused online. In Delfi, the court came down on the side of the victim, ruling that intermediaries were under an obligation to remove abusive content online. This decision was met with criticism from freedom of expression proponents, and the court then adjusted its position in MTE v Hungary. In that case the court decided that removal of content should only take place where there is “hate speech and direct threats to the physical integrity of individuals”. This is a high threshold that still leaves room to respect the right to freedom of expression.

In the end, the United States tech industry has already been complying with content take-downs for many years, particularly in cases of copyright infringement, terrorism, and child pornography. For years these platforms have acted to remove pro-ISIS content whenever it is found, with little pushback from free speech advocates. The difference now is that the definition of what counts as hate speech online is being extended to include white supremacists and neo-nazis.

I do agree with the EFF and many others who are suspicious of giving tech platforms the unilateral power to act as judge, jury and executioner of online content. In my view, this is where regulation and case law could prove useful. The problem is that in a system that enshrines free speech above other rights, there will be little legal protection against abuses of those rights. In a thoughtful response to the current situation, Access Now has stated:

“Freedom of expression is not an absolute right. However, governments appear too willing to obscure the most public and vocal face of hate, while failing to combat the deeper roots of racism and violence, listen to victims, and prosecute those responsible for the most heinous and violent crimes. Hate groups in the U.S. have emerged because they feel emboldened by the rhetoric of U.S. authorities, but also because the government has failed to uphold its responsibility to protect human rights, especially of minority communities.”

I absolutely agree. While the European approach can be flawed and can produce bad results from time to time, I do not feel less free in Europe because of the existing checks and balances on unfettered free speech. Paraphrasing that often-mentioned maxim, freedom of speech stops where the rights of others begin.


Source: Technollama

(Cartoon: “Science Hell” by Tom Gauld)

This may seem like an odd title given that I write a blog and also have an active social media presence. But the question of public engagement is one that comes up often in academic circles, where we are increasingly encouraged to generate impact and communicate our research to the wider community. While many see benefits, there is still some reluctance to communicate outside the recognised channels of academic publishing, namely books and journal articles. I remember a conversation with a colleague who admitted that they only wrote for about three or four people, as nobody else would understand or be interested in the subject. Another colleague dismissed the idea of open access and allowing publications to be read by more people as a futile exercise. She asked: “who would want to read what we write?”

This may sound elitist, but in some areas of study the intended audience is not always the wider public. In legal research, academics may write to gain impact by influencing a small number of peers or policy-makers. Others engage in important theoretical work that seemingly has little engagement potential, but that helps to support other research.

Nonetheless, there are plenty of us who find public engagement rewarding and an important aspect of our research. As a person who loves to write, I find that blogging is a vital part of my academic life. I enjoy trying to convey some of what interests me as a researcher to a wider audience than the one that reads a journal article. However, this attitude is often met with suspicion from those who do not favour engagement: blogs are a distraction, lesser output, useless exercises, or, as a colleague once told me, “blogging does nothing for your career”. Possibly true, but it is still a worthwhile pursuit for many.

Maintaining a blog has become difficult nowadays, with social media and one-off platforms such as Medium taking a large share of the attention. Audiences generally dwindle, and the number of blogs shrinks as a result. A viable option for engagement has become The Conversation, an independent news publisher founded by a number of UK universities which provides journalistic articles sourced from the academic and research community, edited by in-house staff and delivered direct to the public. The published articles are released under a Creative Commons licence, and can be re-published by any news source provided they attribute the author and use the attached metadata. I have started using this tool, and have now published six articles with varying levels of success. These tend to be more journalistic than your average Technollama article, and they are often edited with suggestions to make them more approachable for a wider audience. I have been satisfied with the experience, and some articles have been picked up by mainstream news organisations such as Newsweek, El Pais, Yahoo and Gizmodo.

My latest experience has been a long time in the making. A couple of years ago I wrote a post for the blog entitled “Do androids dream of electric copyright? Ownership of Deep Dream images“. This was a rather successful post, so I decided to turn it into a presentation for Gikii, and then a more ambitious presentation at re:publica in Berlin (I do talk a lot with my hands). I then turned the blog post and presentation into a longer journal article, published this year in Intellectual Property Quarterly under the title “Do Androids Dream of Electric Copyright? Comparative Analysis of Originality in Artificial Intelligence Generated Works“, and I have been very pleased with the response to the paper. This is a great example of the benefits of blogging: you start with an idea, develop it over time, do the research, and produce a longer, high-quality journal article. Not a waste of time, as some would like to believe.

Not content with leaving it at the published article, I then pitched it to The Conversation to go full circle. They picked it up, and after a few suggestions we published a short version entitled “Should robot artists be given copyright protection?” I did not choose the title, but I didn’t give it much thought, as this is an aspect I often leave to the experienced editors at The Conversation; one of the realities of publishing a successful news article in this day and age is that the title has to have a click-bait element. Unfortunately, in this case the title does not reflect the content of the piece at all, as I have never considered whether robots should be awarded copyright protection. The intent of the article was always to ask how copyright law will respond to the growing number of machine learning algorithms that are already generating artistic works. I go through the legislation and case law in the UK, the EU, the US and Australia, and recommend that the best approach is the one taken by UK copyright law, namely to give protection to the person who made the arrangements necessary for the work to be created.

Unfortunately, some responses to the piece did not go beyond the click-bait title and assumed that I was advocating for robot rights. A commenter on the original article wrote:

“of course they should. how else will you have job security, IP law guy?”

The title also seriously annoyed the author of the technology blog FrogHeart, who reproduced parts of the article but seemed to concentrate on one of the examples I gave, namely The Next Rembrandt. I admit that The Next Rembrandt is a headline-grabbing, isolated example, but I go on to give many other examples of works that are being generated by machine learning.

A more thoughtful response to the piece was written by Timothy Geigner at Techdirt, but again it seems to concentrate on the click-bait title and not so much on the content. I have never proposed that copyright should be given to machines; my question is what happens to works generated with considerable input from a machine, for which copyright law has two options: the work falls into the public domain, or copyright is awarded in one way or another to the program creator. Thankfully, Geigner is able to get past the title and see that this was my argument all along. However, the piece confuses the UK and EU approaches (they are not the same), and again seems to imply that I favour rights for robots.

This is the frustrating part of public engagement for experts and academics. You are an expert in your field and make a long and considered argument based on legislation and case law from several jurisdictions, but you run the risk of having to argue the finer points of law with a non-expert, over a version of an argument that you never made in the first place.

So why try to engage in the first place? Why not just let the journal article speak for itself?

Despite the relatively negative experience highlighted above, I still think that publishing the article was worth it. It appeared in various outlets, which gave me more coverage than the few hundred people who subscribe to the average academic journal, or the almost 200 people who have downloaded the original from SSRN. At the time of writing, the article in The Conversation has been read by almost 8,000 people, has been tweeted 148 times, and has received 274 Facebook shares and 267 LinkedIn shares. As an academic, these are figures that would be very difficult to achieve with a journal article.

But is it worth it if you are not able to convey the complexity of the subject? I still think so, but I have to admit that my faith has been shaken.


Source: Technollama

The question of whether free software licences are contracts or mere licences is amongst the most arcane legal arguments one can engage in, up there with similar questions about monkeys having rights. Thankfully, I seem to be a specialist in exactly those types of questions.

A few years ago I became involved in this debate, but here I will defer to an explanation written by Maxime Lambrecht, a researcher at Sciences Po, who has managed to explain the dichotomy really well:

“In US law, as in other common law systems, property law distinguishes between contractual licenses and “bare licenses”. A license can be considered as contractual if certain conditions are present: offer, acceptance and consideration. But a license can also be considered as a “bare license”: a unilateral legal permission by which a licensor permits a licensee to do something she would not have been allowed to do under the law. Therefore, in our case, bare licenses are “copyright licenses”, unilateral permission to use works under certain conditions, whose violation is sanctioned by copyright injunctions.
The differences between these two regimes are numerous. Unlike a contract, a single license does not require the consent of the licensee: it simply indicates the conditions where the use of the work is not a copyright infringement. If these conditions are not met, the license ceases to exist, and the offender is committing a copyright infringement. So violations of bare licenses are sanctioned under copyright’s strict liability regime, with harsh statutory damages. Moreover, while the termination of a contract is limited by law (and contractual terms), bare licenses are revocable at any time.”

In other words, whether a free licence such as the GNU General Public Licence is considered a contract or not could determine which legal regime applies to it, and how easy it would be for a licensor to bring an action for copyright infringement or for breach of contract. There are many legal implications, including damages and contract formation, that differ depending on which characterisation applies.

This debate has been more prevalent in the United States, and to a lesser extent in other Common Law jurisdictions such as England and Wales. This is because in these systems contract formation adds a requirement called consideration to the usual requirements of offer and acceptance. Consideration is usually understood in these systems as a form of payment, or more accurately, something of value that has to be given in return for the contract to be valid. Many early objections to open and free licensing schemes held that these licences were invalid because there was no consideration: they were pretty much giving the software away for free. The argument that many of us made to defend the validity of open licences is that the obligations present in the terms and conditions (such as the copyleft clause) are enough to constitute consideration, and also that in most legal systems of the world consideration is not part of contract formation; offer and acceptance are enough to form a valid contract.

The main ruling dealing with the contract/licence dichotomy came in the US decision of Jacobsen v Katzer, where the Court of Appeals for the Federal Circuit had to decide this very question in order to determine the validity of the Artistic License, an open source software licence. In that decision, the judges held that open licences contain “enforceable copyright conditions”, which means that a violation of an open source software licence exposes the licensee to more than a breach of contract claim, but also to a claim of copyright infringement: if the conditions are not met, the permission granted by the licence falls away, and any use of the covered works amounts to copyright infringement.

This was hailed as a victory for open source in general, as it makes such licences more enforceable in many ways.

Now we have a new decision that sheds light on these arguments, in the ongoing case of Artifex Software v Hancom (the decision relates to a motion to dismiss). The case involves the plaintiff Artifex Software, the developer of Ghostscript, a popular PostScript and PDF interpreter, and Hancom, a South Korean company that develops alternatives to word processors and office suites such as Word and Microsoft Office. Hancom started incorporating Ghostscript into its own product in 2013, but Ghostscript is released under the GPL, which contains certain conditions that Hancom failed to comply with. Amongst other things, the GPL requires that the licensee clearly identifies the licensor in any derivatives, and that the derived source code is made available to the community. After receiving a complaint, Hancom removed Ghostscript from its software in 2016, but the plaintiff sued anyway for breach of contract and copyright infringement. The defendant’s main argument was that the claimant had not pleaded a valid breach of contract claim, and that in any event such a contract claim is pre-empted by copyright law.
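
To make the attribution and source-availability conditions a little more concrete, here is a minimal sketch of the kind of notice a GPL derivative might carry in its source files. The file name, project details and URL are hypothetical placeholders invented for illustration; they are not taken from Ghostscript’s actual sources or from the Hancom product.

    /*
     * pdf_convert.c -- hypothetical derivative work (illustration only).
     *
     * This file incorporates code derived from Ghostscript,
     * Copyright (C) Artifex Software, Inc.
     *
     * Ghostscript is released under the GNU General Public License (GPL).
     * As a derivative work, this program is likewise distributed under the
     * GPL, and its complete corresponding source code is available at
     * https://example.com/pdf-convert/source (placeholder URL).
     *
     * This program is distributed WITHOUT ANY WARRANTY; see the GPL text
     * for details.
     */

    #include <stdio.h>

    int main(void)
    {
        /* Stand-in for functionality that would call into the GPL-covered code. */
        printf("pdf_convert: derived from Ghostscript, distributed under the GPL\n");
        return 0;
    }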

This is at the heart of the question, and why this is yet another interesting addition to the corpus of open source software litigation. The argument by the defendant is that there is no contract, and if there is no contract there cannot be a breach of contract, ergo the licence is still valid and there is no copyright infringement. However, the court did not buy this argument. In a strong declaration that open source licences distributed online are contracts, the court declared:

“Defendant contends that Plaintiff’s reliance on the unsigned GNU GPL fails to plausibly demonstrate mutual assent, that is, the existence of a contract. Not so. The GNU GPL, which is attached to the complaint, provides that the Ghostscript user agrees to its terms if the user does not obtain a commercial license. Plaintiff alleges that Defendant used Ghostscript, did not obtain a commercial license, and represented publicly that its use of Ghostscript was licensed under the GNL GPU (sic). These allegations sufficiently plead the existence of a contract.”

There are further considerations by the court related to pre-emption in copyright law and jurisdiction, but these are less relevant to the current post. The court denied the motion to dismiss, and the case continues.

There are a couple of interesting aspects here for future reference. Firstly, we continue to get decision after decision declaring the validity of open source licences; not only that, we are now getting decisions about their contractual validity. It was not so long ago that I had to defend the validity of free software licences at a conference against the claim made by some lawyers that they were invalid. Secondly, we are starting to see that the question of whether these licences are contracts or mere licences will continue to play out in US courts, and that despite Jacobsen there is more nuance in the debate. It seems that judges will have to analyse the facts of each particular case.

However, we can rest assured that the GPL continues to be a contract where and when it really matters.


Source: Technollama

This question is as old as the Internet itself. It is remarkable just how differently people behave when they are online, often under the cover of anonymity. The Internet allows us to become different people, to behave in ways we often cannot in our everyday lives. This can be quite positive: for people who tend to be introverted and quiet in person, the Internet often provides a chance to be more open and to communicate without the anxiety that often accompanies social interaction. Personally, I often feel that my online persona is closer to the “real me” than the person people encounter when they talk to me. When people meet me for the first time, particularly if they have read me online before, my real-life self seems to come as a surprise.

But this difference can often be negative: the Internet is teeming with trolls, racists, misogynists, homophobes, and all sorts of dubious behaviour. It seems that anonymity often brings out the worst in people.

Online identity is back as a subject of debate because of a Reddit troll going by the name “HanAssholeSolo”, who became famous after Donald Trump posted a gif he had created portraying the US president beating a person whose face had been replaced by the CNN logo. It soon emerged that Mr Solo had a history of posting openly racist and anti-Semitic content, and a CNN journalist found his real identity. When confronted with the possibility of being unmasked, Solo quickly wrote an apology denouncing his previous output and deleted all of his old posts. He said in a now-deleted apology on Reddit:

“I would also like to apologize for the posts made that were racist, bigoted, and anti-semitic. I am in no way this kind of person, I love and accept people of all walks of life and have done so for my entire life. I am not the person that the media portrays me to be in real life, I was trolling and posting things to get a reaction from the subs on reddit and never meant any of the hateful things I said in those posts. I would never support any kind of violence or actions against others simply for what they believe in, their religion, or the lifestyle they choose to have. Nor would I carry out any violence against anyone based upon that or support anyone who did.”

CNN posted the apology, and also commented that they would not be publishing his identity.

I found the apology fascinating because it is one often offered by similar online trolls whenever their real identity is at risk of being exposed. The prevailing narrative amongst many in the alt-right community is that they are not really racist; they only pretend to be online, as part of a shared joke designed to shock people, the media, the mainstream population. They are only shitposting, it’s all for giggles, for the lulz. When uncovered, the apology is the same: “I’m not really a racist, I just play one online”.

While it is evident that online identity is tricky, and that in many ways it allows us to play with various personas and roles, I find it hard to believe such apologies, perhaps because it is a type of behaviour with which I cannot identify or empathise. I can believe in being playful and trying out different opinions in an online forum for a while, and on a few occasions I have pretended to hold a view different from the one I really espoused in order to participate in “enemy territory” forums. I also have no difficulty believing that some people who seem controversial online are actually very different in person. But I cannot believe that most people who post racist content constantly, day in and day out, are not really like that in real life.

Paraphrasing Doctor Who, how do you behave without hope, without witness, without reward? That is who you really are.


Source: Technollama