People selling technology love using words like platform, ecosystem or environment.
Almost everything in the tech world is one of the three.
Some are all three. Hence: the Windows platform, the Windows ecosystem and the Windows environment. Are they the same thing or are they each different?
Likewise Apple, Android, AWS and so on.
The words are a problem for trained journalists because they are non-specific, even ambiguous. They rarely help good communication. We prefer to nail things down with greater precision where possible.
Often you can replace one of these words with ‘thing’ and the meaning doesn’t change.
Platform: redundant, used badly
Or you can remove the word altogether. Usually Windows, Apple and Android are good enough descriptions in their own right for most conversations.
The other problem is that the words are used interchangeably. People often talk about the Windows platform when they mean the ecosystem.
There are times when you can’t avoid using platform or ecosystem. That’s not true of environment: the word is always vague or unnecessary.
Ben Thompson offers great definitions of platform and ecosystem in The Funnel Framework:
A platform is something that can be built upon.
In the case of Windows, the operating system had (has) an API that allowed 3rd-party programs to run on it. The primary benefit that this provided to Microsoft was a powerful two-sided network: developers built on Windows, which attracted users (primarily businesses) to the platform, which in turn drew still more developers.
Over time this network effect resulted in a powerful lock-in: both developers and users were invested in the various programs that ran their businesses, which meant Microsoft could effectively charge rent on every computer sold in the world.
Ecosystem:
An ecosystem is a web of mutually beneficial relationships that improves the value of all of the participants.
This is a more under-appreciated aspect of Microsoft’s dominance: there were massive sectors of the industry built up specifically to support Windows, including value-added resellers, large consultancies and internal IT departments.
In fact, IDC has previously claimed that for every $1 Microsoft made in sales, partner companies made $8.70. Indeed, ecosystem lock-in is arguably even more powerful than platform lock-in: not only is there a sunk-cost aspect, but also a whole lot more money and people pushing to keep things exactly the way they are.
Thompson then goes on to discuss why platforms and ecosystems are no longer as important as they were in the Windows era. His point is that in the past owning the platform and ecosystem was the key to sales success, today being the best product or service for a consumer’s needs is more important.
Originally published in May 2023, this looks back at the immediate six-month aftermath of Twitter’s ownership change. Back then it was still living under its old name and facing a wave of user departures. This post surveys the emerging alternatives and what a fractured social landscape might mean. One of the alternatives, T2, didn’t make it.
When the company changed hands there were high-profile predictions that it was days away from operational meltdown. Those predictions kept coming as the company laid off key workers and shut down cloud services.
Twitter continues to function. There have been hiccups and outages. It may not be the smooth experience it once was. Service quality has degraded. But no sign of a meltdown.
It is not pretty. Twitter isn’t as much fun as it was. Many follow-worthy accounts have left. There is a noticeable increase in far right extremism, hate speech and unpleasant behaviour. Outright nastiness is commonplace. There’s less Twitter journalism.
Poor signal-to-noise ratio
In engineering terms, Twitter’s signal-to-noise ratio was always bad. Now it is noticeably worse.
There’s evidence Twitter’s advertising revenue has fallen off a cliff. The social media site wants to fix this by converting free users into paying customers. This does not appear to be working.
The blue tick which tells other users you are a paying customer has become a badge of shame. High profile users who got a free blue tick under the new regime complain they look bad to their followers.
Yet Twitter stumbles on.
Likewise, the early predictions of mass flight and rapidly falling numbers were overstated. There has been flight, but not on a huge scale. Estimates range from one or two per cent up to five or six per cent. It’s tangible, but not significant. At least not yet.
Mastodon
To date, Mastodon has been the most popular alternative for disgruntled Twitter users.
In the run-up to Twitter’s sale, Mastodon had around 300k active users. Soon after the sale it hit a million active users. By the end of 2022 it was north of 2.5 million active users.
At the time of writing, May 1 2023, it is back down at about half that level: around 1.2 million active users.
Mastodon monthly active users since Twitter was sold.
Incidentally, these Mastodon user number stats come from this source. They are based on data collected by the Mastodon API.
Another estimate says there are 1.4 million active users; the source for this number is @mastodonusers@mastodon.social, an automated counter.
Stats from Mastodon servers.
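If you want to pull these numbers yourself, the public Mastodon instance API makes it easy. Here is a minimal sketch; the server names are examples only, and the aggregated figures quoted above combine counts like these from thousands of servers.

```python
# A minimal sketch: fetch public stats from a Mastodon server's
# instance API. Aggregators combine counts like these from
# thousands of servers to produce network-wide figures.
import requests

def instance_stats(domain: str) -> dict:
    """Return the public stats block for a single Mastodon server."""
    resp = requests.get(f"https://{domain}/api/v1/instance", timeout=10)
    resp.raise_for_status()
    return resp.json().get("stats", {})

# Example servers; substitute any instances you care about.
for domain in ("mastodon.social", "mastodon.nz"):
    stats = instance_stats(domain)
    print(domain, "registered users:", stats.get("user_count"))
```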
Rise and fall
The sharp rise and fall of Mastodon active user numbers is no surprise. Twitter users spot a degradation or witness an outrage, decide to bail and try something else; some then decide the alternative doesn’t meet their expectations.
What’s important here for Mastodon is that today’s user numbers are about four times what they were six months ago. That’s impressive growth by any standard.
Impressive in Mastodon terms, but looking at Twitter numbers provides a useful reality check.
Mastodon in perspective
Mastodon’s user numbers would be a rounding error in Twitter’s user numbers.
At the time of the takeover Twitter had around 450 million active users. That means, if we are generous, that Mastodon is about half a per cent of Twitter.
You can’t make a coherent argument that Mastodon is a threat to Twitter on that basis.
Even if you look at the 11.5 million or so people who have signed up for Mastodon, it is around 2.5 per cent of Twitter’s size.
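The back-of-envelope arithmetic, using the figures quoted above:

```python
# Back-of-envelope check on the percentages quoted above.
twitter_active = 450_000_000      # Twitter at the takeover
mastodon_peak = 2_500_000         # Mastodon, end of 2022
mastodon_now = 1_200_000          # Mastodon, May 2023
mastodon_signups = 11_500_000     # everyone who ever signed up

print(f"peak:     {mastodon_peak / twitter_active:.2%}")     # ~0.56%, the generous half a per cent
print(f"current:  {mastodon_now / twitter_active:.2%}")      # ~0.27%
print(f"sign-ups: {mastodon_signups / twitter_active:.2%}")  # ~2.56%
```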
Potential
Mastodon has its merits and it has potential. The idea of a Fediverse is interesting. We’ll look at that in another post. It is thriving and lively; in that sense it is, for now, the nearest thing to a viable Twitter replacement for many users.
It can work in a browser and there are plenty of apps for Mastodon users.
There are compatible services that use the underlying open ActivityPub protocol that can work well with Mastodon. Micro.blog is one example.
Another is Bluesky, a Twitter-like alternative funded by Twitter founder Jack Dorsey. That means it has a lot of attention. Possibly more attention than it deserves.
For now you need an invite to join Bluesky. It is decentralised, or soon will be, but not in the same way as Mastodon.
T2 is a Twitter reboot
A third alternative is T2, which was founded by ex-Twitter employees. It looks and feels a lot like early Twitter. That is the vibe the founders say they are aiming for. They want to focus more on community and less on building a viral, algorithmic monster that messes with users’ heads.
The interface is cleaner and there are, for now, few features. If you want to leave but enjoyed the pre-sale Twitter experience, this might be your best new online home.
T2 is young… the founders left Twitter in November. What you see today is not its finished form. Hell, the company doesn’t even have its official name yet. At the time of writing there is no app. The T2 moniker is a marker that hints at Twitter 2.0.
These things take time
Six months may feel like a long time when you’re moving at the internet’s pace, but it’s nothing when it comes to establishing a new social media service or running down an old one. That takes years.
For this reason, it is far too early to say what the post-Twitter landscape will look like. And realistically, unless Twitter collapses in a messy heap under the pressure of one too many bad leadership decisions, that service will likely continue in one form or another. To put things in perspective, did you realise MySpace continues to operate? Likewise Yahoo.
Both Mastodon and T2 look promising, if unfinished. Other alternatives are on the way. With luck we will see alternative ideas and approaches competing to be your next online home. That’s positive. The social media scene was, in that sense at least, stagnating before the Twitter sale.
In 2008 researchers at Oxford University listed the ten most irritating phrases:
1 – At the end of the day
2 – Fairly unique
3 – I personally
4 – At this moment in time
5 – With all due respect
6 – Absolutely
7 – It’s a nightmare
8 – Shouldn’t of
9 – 24/7
10 – It’s not rocket science
I’m not guilty of any of these crimes against the English language. Though I may have uttered 10 in jest a few times because the last place I worked in the UK before emigrating to New Zealand was the Rutherford Appleton Laboratory. At the time it was a rocket science establishment.
It’s hard to believe the compilers didn’t include ‘game changer’ which is both irritating and a cliché. Perhaps it wasn’t so overused in 2008.
My other personal hates are ‘going forward’, sometimes used as ‘moving forward’, and the ever-awful ‘reach out’. Do you have a phrase you hate?
Originally published in June 2014, this post looks at how Apple’s Continuity strategy — seamless hand-off of tasks and services across devices — contrasted with Microsoft’s convergence vision at the time and what it said about each company’s approach to personal computing. It remains an historic snapshot of competing philosophies, back when Microsoft still had a phone operating system.
Apple mapped the direction its technology will take at last week’s World Wide Developer Conference (WWDC).
In Apple’s world, PCs are distinct from phones and both are different from tablets.
Apple offers different devices for different parts of your life. iPhone when on the run, tablet when on the sofa, PC when at a desk or whatever else you choose.
With Apple each device class plays its own role. Hardware, software and user interfaces are optimised to take advantage of the differences.
Apple aims for integration
Apple calls this Continuity. While each device offers a different experience and there are different user interfaces, you can move smoothly between them.
This already works to a degree with Apple kit. However, Apple upped the ante at WWDC announcing changes to make for even smoother handoff as you move from one device to another.
One other thing is clear. Apple sees mobile phones as central; tablets and PCs are, in effect, secondary. This means you’re going to need an iPhone to get all the benefits of owning other Apple devices.
Software features like Continuity are designed to keep users locked into Apple’s hardware cycle. It adds a new layer of utility to highly portable machines like the MacBook Air which, as noted last year, had already mastered the physical requirements of ‘go-anywhere’ computing.
Microsoft puts PC centre stage
Microsoft’s technology centres on the personal computer. Or, perhaps, whatever the PC becomes next.
What that means in practice is Microsoft tablets and phones are extensions of the Windows PC. The Windows you see on a desktop PC is the same, or almost the same, on a Microsoft tablet or a Windows Phone.
Microsoft talks about being consistent.
When you use Microsoft kit you can move smoothly between devices because they all look and run in much the same way. You only need to learn how to use one user interface. Up to a point, all the skill gained with one Windows device is instantly transferable to other Windows devices.
Apple, Microsoft roots
The contrasting philosophies stem from each company’s history.
Apple’s success came after realising a phone could do 90 percent of what PCs can do. It may not sell as many iOS phones as the massed ranks of Androids, but it dominates smartphones in other ways.
It also dominates the tablet market. Putting its most successful product at the core of its strategy is understandable.
Likewise, Microsoft dominates PCs. While personal computers are not growing, they are not heading for immediate extinction. Microsoft aims to have them evolve into something new.
It makes sense for Microsoft to come at 2014 technology from a PC-centric point of view.
There is no clear right or wrong here. Apple and Microsoft offer two distinct visions. They could end up at the same destination while travelling on different paths.
Triangulating Google
Apple and Microsoft have been strong in hardware and software. Services sit at the third corner of the modern personal technology triangle. That’s where Google comes from; Apple and Microsoft are only now picking up momentum in services.
Google beats both with its services. Google search, mail, online collaboration and so on are central to the company’s offering. It is a relatively late entrant into hardware and software.
For now, Google is the dominant name in personal cloud services. Because all the hard work is done remotely on massive server farms, Google sees hardware and client software as secondary. It leaves most of the hardware part of its world to partners.
The move toward a seamless experience across phone and desktop further erodes the traditional interaction model. It’s the next step in a transition in favour of more fluid, touch-based alternatives.
Choice
It would be wrong to see any one of these three strategies as better. They represent choice and your choices are clearer today than they were even six months ago.
It’s possible the three companies will diverge. It’s just as possible they’ll converge.
It sounds contradictory, but I expect a little of both. By that, I mean if one company gets a clear upper hand in any area, the other two will move to counter the threat.
Alternatively a fourth player could come along and upset the balance of power.
Either way the market is dynamic. This analysis is just a snapshot in time. It’s unlikely things will look the same 18 months from now let alone five years.
Technology companies talk up their products and technologies. Let’s not mince words — they’re hype merchants.
They hire public relations consultants and advertising agencies to whip up excitement on their behalf.
Sometimes they convince the media to follow suit and enthuse about their new gizmos or ideas.
Occasionally, the media’s constant search for hot news and catchy headlines leads to overenthusiastic praise — or a journalist swallowing a trumped-up storyline.
Recognising the hype pattern
None of this will be news to anyone in the business. What you may not know is that the IT industry’s shameless self-promotion is recognised and enshrined in Gartner’s Hype Cycle.
Gartner analysts noticed a pattern in how the world, and the media, respond to new technologies: an initial burst of excitement, followed by disillusionment, then a more balanced view.
This observation evolved into what Gartner calls the Hype Cycle, often shown as a simple curve on a graph. The horizontal axis shows time, while the vertical axis represents visibility.
Hype cycle has five phases
In the first phase, the “technology trigger”, a product launch, engineering breakthrough or some other event generates huge publicity.
At first, only a narrow audience is in on the news. They may hear about it through the specialist press and start thinking about its possibilities.
Things snowball. Before long, the idea reaches a wider audience and the mainstream media pays attention.
Interest builds until it reaches the second phase — the “peak of inflated expectations”. At this point the mainstream media becomes obsessed – you can expect to see muddle-headed but enthusiastic TV segments about the technology.
You know things have peaked when current affairs TV shows and radio presenters start covering the story.
At this point, people typically start to have unrealistic expectations. While there may be some successful applications, there are often many more failures behind the scenes.
Trough of disillusionment
Once disappointments become public, the hype cycle moves into what Gartner poetically calls the “trough of disillusionment”. The mainstream press will turn its back on the story; others will be critical. Sales may drop. The idea falls out of favour and seems unfashionable.
Some ideas sink without trace, but more often they re-emerge on the “slope of enlightenment”. This is where companies and users who persisted through the bad times come to a better understanding of the benefits on offer. By this stage, most of the media has lost interest; progress continues quietly in the background.
Finally, the cycle reaches the “plateau of productivity”, when the benefits of the technology are widely understood and accepted.
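The curve itself is easy to sketch. Here is a toy rendering; the formula, a spike plus a slow S-curve, is purely illustrative and is not Gartner’s own maths.

```python
# A toy rendering of the Hype Cycle's shape: a visibility spike
# (inflated expectations), a trough, then a slow climb to the plateau.
# The formula is illustrative; it is not Gartner's own maths.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)
spike = np.exp(-((t - 1.5) ** 2) / 0.5)     # peak of inflated expectations
plateau = 0.5 / (1 + np.exp(-(t - 6)))      # slope of enlightenment to plateau
visibility = spike + plateau                # the trough sits between the two

plt.plot(t, visibility)
plt.xlabel("Time")
plt.ylabel("Visibility")
plt.title("Stylised Hype Cycle")
plt.show()
```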
In 2008 the world was waiting for a digital device that would do for newspapers what the iPod did for music. At the time there were no obvious candidates but a few promising developments.
There were hopes that a dedicated ePaper device might fill the gap. This would be like the Kindle, but better suited to frequently updated news reports. The Kindle’s physical format was promising, as was its ability to display crisp, easy-to-read text. It would help if the news device could display editorial photographs.
A story in ComputerWorld looked at the future of ePaper, which the author said was “just around the corner”.
ePaper looked a plausible candidate
ePaper clearly had potential. It could disrupt publishing business models which were already under attack from the internet.
Yet, at the time, the claim that ePaper was “just around the corner” was questionable. Claims like that can never be taken seriously until practical products hit the market.
I’ve been writing about technology since 1980. In that year I saw my first voice recognition system and the first example of what we now call electronic books or eBooks. The proud makers of that voice recognition device said their hardware would be “ready for prime time” within two years and keyboards would quickly be a thing of the past.
In 2008 voice recognition technology was still around two years away from prime time.
eBooks didn’t hit take-off
Likewise, in 1981 electronic book makers were confidently predicting we’d soon be cuddling up at night with their hardware. By 2008 there still hadn’t been anything as impressive or as easy to read as ink stamped or squirted on crushed, dead trees. Old-fashioned books refused to die. Printed newspapers, on the other hand, appeared to be on the way out.
Another possibility at the time was the iPod-derived iPhone, which was still new in 2008. It had a tiny screen and people were skeptical about its ability to become the iPod for news.
In the meantime, the internet continued to build momentum delivering news and other information to desktops, laptops and handheld devices like Apple’s iPhone. Although none of these were anything like as satisfactory as paper, people could use them to read news. Many had already switched to getting news that way.
The view from 2025
Looking back, the phone handset won by default due to ubiquity, not superior reading experience. Today the majority of news readers get their fix through their iPhone or Android phone.
The iPad and other tablets became a supplementary news reading device. They are ideal for immersive reading but lack the necessary ubiquity to be the sole news reader.
It turns out all the fretting about screen quality and creating a better reading experience was focusing on the wrong problems. Yes, there are better devices for consuming text-based material, but the device in everyone’s pocket is always going to win any competition.
What was not apparent in 2008 is that publishers would adapt to the preferred format. In time the mobile-first design model, where speed and scrolling trump the print-like page fidelity promised by ePaper, became dominant.
In many cases news publishers build dedicated apps for phones and tablets. This has the added advantage of deepening their relationship with readers and increasing their ability to learn more about those readers so they can better target advertising.
New models: paywalls and the creator economy
Before anyone had heard of the internet, newspapers made fortunes from physical copy sales. In the UK, the big newspapers would sell millions of copies each day. The revenue from print sales was so large that advertising barely featured in the most popular British papers.
In most of the rest of the world, newspapers were financed by advertising sales.
The transition from physical sales to digital revenue models has been hard. Up to a point it is still a work in progress. At one point the iPad model looked promising. This involved iTunes-enabled micro-transactions. Some titles still sell subscriptions this way.
Meanwhile newspaper websites use paywalls and subscriptions as a way of charging for content. Other, smaller news operations use alternative subscription models.
Early attempts at paywalls failed. While they worked for publishers with exclusive coverage of lucrative niche markets, most obviously in business journalism, more general news publishers struggled. Major players like the New York Times and The Guardian relied on massive scale, delivering readers to advertisers on the strength of high-quality, high-cost journalism.
Advertising failure
In practice, tech giants Google and Meta (Facebook) captured nearly all the digital advertising revenue, forcing newspapers to go subscription-only to survive. The Guardian continues a free model, but carpet-bombs readers with needy promotions begging for ‘donations,’ degrading the reading experience for those unable or unwilling to pay.
Most surviving news publishers rely on traditional paywalls and subscriptions. The irony is that insisting on subscriptions gives publishers greater visibility of exactly who is reading. This information is valuable when it comes to selling better-targeted advertising.
Beyond the institutional paywall is the rise of Substack and other newsletter models. This site runs on Ghost Pro, which offers an alternative approach to online publishing and newsletters. There’s no charge here, but adding one would be relatively easy.
The rise of the independent journalist blogger
Substack and newsletters represent the true decentralised evolution of the “journalist blogger” first discussed on this site in 2008.
With these tools journalists can cut out the publisher and take the vast majority of the revenue.
It’s long been known that the two ways to make money off any media in the digital age are aggregation (putting things together, e.g., major news sites) and disaggregation (pulling them apart, e.g., individual newsletters).
If a journalist focuses on a high-value niche—most likely business, finance or specific areas of politics—there’s a ready market for their expertise. This is the long tail of journalism. You don’t need millions of readers to make a specialist niche pay, a thousand subscribers paying a modest sum is enough for a reasonable income.
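To make that arithmetic concrete, here is a sketch with illustrative numbers; the fee and the platform cut are assumptions, not anyone’s actual rates.

```python
# Illustrative assumptions only: a thousand niche subscribers at a
# modest monthly fee, minus a hypothetical platform cut.
subscribers = 1_000
monthly_fee = 8.00        # assumed subscription price
platform_cut = 0.10       # assumed 10 per cent platform fee

gross = subscribers * monthly_fee * 12
net = gross * (1 - platform_cut)
print(f"gross ${gross:,.0f} a year, net ${net:,.0f}")  # gross $96,000, net $86,400
```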
News and journalism are not like music
Let’s go back to the start of this post, the point about “a digital device that would do for newspapers what the iPod did for music.” In some ways, the analogy is unrealistic. Today, the iPod functionality is wrapped into every iPhone. Android phones act the same way.
Music fans can buy all-you-can-eat streaming music from Spotify or Apple Music. They can also buy single tracks and albums. These models never worked for news. Instead, we have paywalls or the Patreon-Substack direct creator support model. And that brings us to the key point: The real disruption was not about the device, but the revenue model.
In 2008, one UK journalist predicted the future of news would be a “small hub of professional journalists” with citizen journalists on the periphery. He was wrong.
The distinction between the “professional journalist” and the “citizen journalist” is now obsolete. The device (the phone) was merely the delivery mechanism; the real iPod-like disruption was the technology that allowed the writer to get paid directly.
The new professional journalist is simply one who can:
Own their audience: Control the email list (Substack/Ghost).
Command a niche: Offer expertise valuable enough to justify a subscription.
The modern news landscape is not a single hub, but a decentralised network of powerful, independent creators competing with large institutions. In 2025, the writer’s brand is often stronger than the publisher’s brand. That’s a concept that was almost unthinkable when this article was first written.
**2026 update:** This post was first published in 2009, when Twitter was a relatively new and exciting social media service. Twitter has since been renamed X and the media landscape has changed significantly. The argument below reflects the context of that time.
Australian tech journalist Renai LeMay says Twitter is journalism. (The original site is dead, so no link, sorry). He is right but only up to a point.
LeMay writes:
Journalists are not simply using Twitter to promote their own work and get news tips. This is nowhere near to being the whole truth. In fact, audiences are using Twitter as a powerful tool to engage with journalists directly and force a renewal of journalism and media along lines that audiences have long demanded.
Well, some are.
I follow about 25 Australian and New Zealand journalists on Twitter. On top of that, I follow about the same number of public relations people and a handful of both from elsewhere in the world.
As an unscientific rule of thumb, I’d say only 40 per cent of journalists use the service in the way LeMay suggests.
About the same number simply use it as a way of promoting their online stories without any meaningful engagement.
Twitter journalism should not be broadcasting
In other words, they aren’t joining the conversation. Instead, they are simply using Twitter as a broadcast medium.
This can be down to dumb managerial restrictions on their use of the technology. Journalists might understand social media, but their bosses don’t. Some bosses are frightened of it. Some bosses see Twitter as a competitor to their newspapers, websites, TV or radio stations.
A small percentage of journalists dabble in Twitter engagement, going on and offline depending on their workload. I understand. I’m sometimes guilty of switching off Twitter when there is a looming deadline and a huge number of words to write. It can be a distraction.
Some of the remainder are still in the dull “morning tweeps” and “I had muesli for breakfast” school of Twittering, or the more disturbing narcissistic one. Their social media use and their journalism don’t connect.
Apart from a handful of exceptions, it is hard to understand the attraction.
Let’s get those exceptions out of the way first.
Flyers: Ebooks are great for avid readers who are long-distance flyers. The hardware weighs a few hundred grams and is not much bigger than a phone. You can carry an entire library for less space and weight than a paperback. It’s a strong argument.
That said, I find my eyes tire much faster with an ebook than with a printed book. And, for reasons I can’t fully explain, probably to do with lighting, it’s not as relaxing if you plan to read before snoozing on the flight.
These days I carry a couple of printed books in my carry-on bag and another one or two in the stowed luggage. Yes, it’s heavy and takes up valuable room. I can live with that.
Textbooks: There’s a case for publishing textbooks as ebooks. Indeed, many textbooks are only available in a digital form.
When I was a student, carrying three or four weighty physics books back and forth to the university was a serious workout. An ebook, especially one that fits in a pocket, makes more sense.
There’s an added bonus: it’s easy to update an electronic textbook. Doing that with print is hard.
Large print: Being able to adjust the size of print so that ageing eyes can read is another argument in favour of the ebook. As the Vox story explains, this is one reason older people are keener on ebooks than younger folk.
What’s wrong with the ebook business model?
In a word: greed. It costs far less to distribute photons and electrons than mashed-up dead trees sprayed with ink. There’s no manufacturing, no shipping, no shopkeepers taking a reasonable but still hefty retail margin.
And yet ebook publishers ask customers to pay as much or almost as much for digital books as for printed ones. Their margin for each book is way higher than for printed books. As an aside, do authors get paid the same for digital copies?
Publishers can’t justify this.
But it gets worse. If you buy a printed book, you can hand it to someone else after you have read it. You might sell it secondhand or donate it to an op shop. Either way, it retains value after it is read. Restrictive licences mean that’s not the case with ebooks. In other words, publishers get another bonus.
Ebooks, the price isn’t right
Given all this, an ebook should cost a fraction of the price of a printed book, somewhere in the region of 10 to 20 percent. They don’t. The savings are not passed on to customers.
If ebooks were priced appropriately, they’d sell; it’s that simple. Almost everyone carries a device which could act as an ebook reader. They could do better.
The Vox story also makes a valid point about publishing and retail monopolies, which, if you think about it, also come back to greed. It turns out ebook publishing is not a money-spinner.
What could have been an ebook revolution is, in part, a victim of greed.
This post from 2013 looks at one reason why ebooks failed to break into the mainstream.
A year ago Dan Gillmor complained about greedy US publishers forcing ebook prices to climb by between 30 and 50 percent.
In the US, electronic books are now priced the same as, or sometimes higher than, the hardback version of the same book. As Gillmor points out, this is a terrible deal because unlike physical books, you can’t resell, trade or give away your finished ebook.
The same dumb thinking is at work in the music and movie industries where digital media costs as much as physical media.
Physical books simply should not be cheaper than digital books
I’ve made this argument before, I’ll make it again. Printers use raw materials and machines to make physical books, CDs or DVDs. They package and ship them to warehouses before shipping again to stores.
Factories, packaging companies, shipping firms, wholesalers and retailers all clip the ticket. These are input costs and they’re not cheap, they can account for over half the retail cost.
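To put rough numbers on that claim, here is an illustrative sketch; every figure is an assumption, not publishing industry data.

```python
# Rough, illustrative numbers only: if physical input costs eat more
# than half the retail price, what might a fairly priced ebook cost?
retail_print = 30.00                   # printed book, in dollars
physical_costs = 0.55 * retail_print   # printing, shipping, retail margin
publisher_take = retail_print - physical_costs

print(f"publisher's take on print: ${publisher_take:.2f}")   # $13.50
print(f"10-20% of print price: ${0.1 * retail_print:.2f} "
      f"to ${0.2 * retail_print:.2f}")                        # $3.00 to $6.00
```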
While we can understand publishers wanting to recoup some of the cost-cutting benefit from digital media, they can’t expect to have it all. Doing so has three direct consequences:
Consumers see high prices as a rip-off. This has the knock-on effect of undermining otherwise valid moral arguments against copyright piracy.
It slows migration from the old low-margin physical model to the new higher margin model. Why would consumers choose what is still an inferior experience when the cost of hardware plus higher cost of media makes it more expensive?
Reduced sales mean set-up costs of a book, CD or DVD are spread over fewer purchases. Surely this is a time when publishers need to seed the market.
At the start of 2013 we’re at a point where the decline in printed book sales has stabilised while the hitherto triple-digit growth in ebook sales has fallen to a still impressive 34 percent. And sales of ebook readers plunged 36 percent in 2012.
So where do we go from here? Will publishers cut ebook prices sharing some of the extra margin with their customers or will they paint themselves into a corner?
The story says researchers at Norway’s Stavanger University asked people to read the same short story on a Kindle and on paper.
Those who read on paper did a better job of remembering the events than those who read on a Kindle.
A similar study looked at a school student comprehension test which showed those who read the paper document performed better than those who read digitally.
None of this surprises me, it mirrors my experience. I’ve noticed I get more from reading print than digitally. Also my eyes tire much slower with print.
If I have a serious editing or sub-editing job to do, I’ve learnt that proofreading a printed document is more accurate than working directly onscreen.
Knowing readers absorb less with digital books is unlikely to change anything. In theory nothing is likely to stop the world moving from print to pixels although publishers have plenty of scope to screw up. Yet, that aside, with e-books there’s a danger we’ll know more and understand less.
The frustrations with the format go beyond economics. While the initial debate was about the ‘ebook price swindle’, we are now finding that the proprietary nature of these files also makes deep reading and comprehension significantly harder.
I can read a printed book for hours without stopping, but struggle to last even 30 minutes with an ebook. Eye strain, poor sleep and lost focus make sustained screen reading far harder than turning real paper pages.
On Saturday I picked up a printed hardback novel I ordered from my local public library. When I got home I sat down to read. And read.
I read for five hours straight. On Sunday I woke early and read for another three hours without disturbing my sleeping wife.
Which is more than I can do with an ebook
Neither would have been possible with an ebook. I know, I’ve tried: three specialist ebook readers, Apple’s iPad 2 and an Android phone.
None work for me when it comes to a serious reading session.
I’ve found I can’t read an ebook for one whole hour, let alone five. There are three problems: two are physical, the third may be a personal failing.
Blurry vision and headaches
First, my eyes go blurry after about forty minutes. They weep. I don’t mean I’m crying, I mean water fills my eyes and runs down my cheeks. On some occasions the ebook experience also gives me headaches.
When this happens my eyes stay blurry for some time after I stop reading. At least an hour, maybe more. I can’t drive or do much that requires good vision.
This doesn’t happen with printed books.
Sleep problems from screen reading
If I read a printed book last thing before switching out the light, I can usually fall asleep minutes after hitting the pillow. If I read using a screen I struggle to sleep at all. I suspect the colour and brightness of the display has something to do with this. You may have another idea. Please share it if you do.
Losing focus with ebooks
My third problem with sustained eBook reading is I get distracted. This may be a failing on my part or it may be related to the discomfort described above. Either way, I find it hard to concentrate on an ebook. This isn’t a problem reading novels, it is a problem when I’m reading non-fiction.
I’m in a race to see whether I lose my concentration or my vision first. It turns out I’m not alone.
When I read a printed book in bed early in the morning, it doesn’t disturb my wife. When I tried reading an ebook early one morning, it woke her.
I should confess I haven’t tried a specialist ebook device in months. The technology may have improved. Perhaps I should try again. In recent weeks I’ve read books on an iPad – I took one loaded with a library on a recent trip. Yet I ended up opening a printed book and sticking with it until I returned home.
This old post was written in 2011 about early e-readers and tablets. While e-ink technology and blue light filters have improved significantly, many readers still experience these issues with backlit screens. The debate between digital and print reading continues.
This story was originally posted June 2009. It remains relevant today.
People spend less time reading online news than reading printed newspapers because reading a screen is more mentally and physically taxing. For a closely related take on this see E-books harder to read, hard to comprehend.
This has consequences.
In Newspapers online – the real dilemma, Australian online media expert Ben Shepherd examined why online newspapers earn proportionately less money than print newspapers. He says it comes down to engagement. A typical online consumer of Rupert Murdoch’s products spends just 12.6 minutes a month reading News Corporation web sites. In comparison the average newspaper reader spends 2.8 hours a week with their printed copy.
Print still better in some ways
There are other factors. But I’d argue the technology behind online reading is part of the problem:
Newspapers and magazines are typically printed at 600 dots per inch or higher resolution.
Computer screens typically displayed text and pictures at 72 pixels per inch, with some at 96, when this story was originally written in 2009. Today’s phones typically have 300 to 500 dots per inch, tablets are around the 200 to 300 DPI range, laptops 150 to 250 and desktop displays anywhere from 90 to 160.
Contrast is usually far better on paper than on screen.
Screens often include distracting elements. This can be particularly bad where online news sites have video or audio advertising on the same page as news stories.
Lower resolution means it takes more effort for a human brain to convert text into meaningful information. Screens are fine for relatively small amounts of text, but over the long haul your eyes and your brain will get tired faster even when there are no distractions. You’ll find it harder to concentrate and your comprehension will suffer.
Kill your notifications. Yes, really. Turn them all off. (You can leave on phone calls and text messages, if you must, but nothing else.) You’ll discover that you don’t miss the stream of cards filling your lockscreen, because they never existed for your benefit. They’re for brands and developers, methods by which thirsty growth hackers can grab your attention anytime they want.
Allowing an app to send you push notifications is like allowing a store clerk to grab you by the ear and drag you into their store. You’re letting someone insert a commercial into your life anytime they want. Time to turn it off.
This has bothered me for some time. Not least because the mental space needed to write anything more than a paragraph means turning off all notifications. I used to take this even further.
Push notifications sin-binned
It’s impossible to focus when there’s a constant barrage of calls on your attention. I go further than Pierce. For much of the time I have my phone set on silent and all computer notifications are permanently off. Everything except system warnings about a flat battery or similar.
Touch Voicemail catches messages from callers should they bother to leave one.
There are two exceptions to the clampdown. I allow text messages and voice calls from immediate family members and my clients or the people who work for them. The other exception is I allow calendar notifications to remind me if, say, I know I have to leave later for a meeting.
The downside of this is that some things get missed. It’s rare, but I have missed out on stories by putting myself in electronic purdah.
Yet on the whole, it works well. There’s always the list of missed calls, messages and so on. I can go to the notification centre, scan the long, long list of missed items and realise that nothing important slipped through to the keeper.
The problem of messaging overload has only become worse since 2014, with WhatsApp, Signal, Telegram, Discord, Slack and Teams all fragmenting our communications.
Originally published July 2017. Bitcoin did crash in 2018, recovered, crashed again in 2022. Since then it reached new highs and crashed again. The bubble dynamics described here remain relevant even if the timing was off.
Finance writer and ex-banker Frances Coppola writes about financial bubbles. She says the cryptocurrency market shares characteristics with earlier bubbles like Dutch tulips and dotcom stocks.
She writes:
There are three key stages in the lifecycle of a financial bubble:
The “Free Lunch” period. A long, slow buildup of price distortion, during which investors convince themselves that rising prices are entirely justified by fundamentals, even though it is apparent to (rational) observers that they are buying castles built on sand.
The “This is nuts, when’s the crash?” period. Everyone knows prices are far out of line with fundamentals, but they carry on buying in the irrational belief they can get out before the crash they all know is coming. Speculators pile in, hoping to make a quick profit. Prices spike.
The “Every man for himself” period (sorry, FT, I couldn’t find a reference for this one). Prices crash as everyone runs for the exit. This can happen a number of times, separated by brief periods of stability when everyone congratulates themselves on a lucky escape. But they are wrong. The ship is sinking.
Which means a crash is underway. This does not only apply to Bitcoin, but to all of the cryptocurrencies.
The remarkable aspect of this is that not everyone could see it coming. As Coppola points out, some investors still don’t accept the likelihood of a crash.
It will be interesting to see what remains of cryptocurrencies after things settle down. The idea of a blockchain isn’t going away, despite it being far less useful than the hype surrounding it suggests. It could be that the, at times irrational, enthusiasm for cryptocurrency is coming to an end, or it may simply be drawing breath before another bubble forms.
RSS is no longer a key content distribution channel.
Martin Belam.
He’s right in that RSS never became a mainstream means of consumption (indeed, I’d argue that it never really was a key content distribution channel), but wrong in that, for those of us who live or die by the information we find, consume and process in various ways, it’s still a vital tool.
Adam Tinworth.
RSS is not dead, it may be niche
When Google closed Google Reader there was discussion that said RSS was dead and no longer needed now that people get their feeds from social media. As Tinworth points out, there are still 15 million die-hard feed-reading users out there.
I’m one.
RSS cuts through the noise. More importantly, it helps you find information.
Social media has its uses, but with services like Twitter or Facebook, new stories go whooshing by in among all those cat pictures and other distractions. Not only that, but a third party gets to decide what you see. In the case of most social media, that means algorithms designed to maximise the revenue earned from your attention.
A single place for finding news
If you want to check this morning’s technology news from New Zealand publishers, RSS is the only easy way to capture everything in one single spot. The alternative is to spend hours ploughing through multiple sites.
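That workflow is simple to automate. Here is a minimal sketch using the third-party feedparser library; the feed URLs are placeholders for whichever publishers you follow.

```python
# A minimal sketch of the "one single spot" workflow using the
# third-party feedparser library (pip install feedparser).
import feedparser

feed_urls = [
    "https://example.co.nz/technology/feed.xml",   # placeholder URLs
    "https://example.org/news/rss",
]

items = []
for url in feed_urls:
    parsed = feedparser.parse(url)
    source = parsed.feed.get("title", url)
    for entry in parsed.entries:
        if entry.get("published_parsed"):          # skip undated entries
            items.append((entry.published_parsed, source, entry.get("title", "")))

# Newest first: every publisher's headlines in one place.
for published, source, title in sorted(items, reverse=True)[:20]:
    print(f"{source}: {title}")
```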
One of the disturbing aspects of Google’s decision is that it means some publishers may, stupidly, decide maintaining an RSS feed is no longer worth the bother. That’s ridiculous; it is a set-and-forget technology. There are some publishers, or there were some in the past, who don’t appear to value the technology.
Long may the practice of creating feeds live. It’s essential for anyone who needs a comprehensive list of relevant information.
And, while I have your attention, this site has an RSS feed. You are welcome to use it.
Scientific reviews involve research, prising the back from things, taking them apart and dropping them on hard surfaces. Listening to noises. Measuring everything. Running battery life tests.
You come away from these tests with numbers. Often many numbers. Maybe you’ve heard of data journalism. This is similar: you need maths and statistics to make sense of the numbers.
Scientific reviews take time. And money. You need deep pockets to test things to breaking point.
Benchmarks
Benchmarks are one reason scientific reviews take so much time. You do them again and again to make sure. You draw up meaningful, measured comparisons with rival products. Then put everything into context.
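The repetition is the point. A single run proves little; you time a task many times and report the spread. A bare-bones sketch of the idea:

```python
# Why benchmarks take time: repeat the measurement, report the
# spread, not one flattering number.
import statistics
import time

def benchmark(task, runs: int = 30) -> tuple[float, float]:
    """Time a task repeatedly; return (mean, stdev) in milliseconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.mean(timings), statistics.stdev(timings)

mean_ms, stdev_ms = benchmark(lambda: sorted(range(100_000), reverse=True))
print(f"{mean_ms:.2f} ms ± {stdev_ms:.2f} ms over 30 runs")
```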
We used the scientific approach when I ran the Australian and New Zealand editions of PC Magazine.
This was in the 1990s. ACP, the publishing company I worked for, invested in a testing laboratory.
We had expensive test equipment and a range of benchmarking software and tools. Specialist technicians managed the laboratory. They researched new ways to make in-depth comparisons. Like the rest of us working there, they were experienced technology journalists.
The scientific approach to product reviews
My PC Magazine colleague Darren Yates was a master at the scientific approach. He tackled the job as if it were an engineering problem. He was methodical and diligent.
You can’t do that in a hurry.
There were times when the rest of my editorial team pulled their hair out waiting for the last tests to complete on a print deadline. We may have cursed but the effort was worth it.
Our test results were comprehensive. We knew to the microsecond, cent, bit, byte or milliamp what PCs and other tech products delivered.
There are still publications working along similar lines. Although taking as much time as we did then is rare today.
Publishing industry pressure
It’s not only the cost of operating a laboratory. Today’s publishers expect journalists to churn out many more words for each paid hour than in the past. That leaves less time for in-depth analysis. Less time to weigh up the evidence, to go back over numbers and check them once again.
At the other end of the scale to scientific reviews are once-over-lightly descriptions of products. These are little more than lists of product highlights with a few gushing words tacked on. The most extreme examples are where reviewers write without turning the device on — or loading the software.
Some reviews are little more than rehashed public relations or marketing material.
The dreaded reviewers’ guide
Some tech companies send reviewers’ guides. Think of them as a preferred template for write ups. I’ve seen published product reviews regurgitate this information, adding little original or critical.
That’s cheating readers.
Somewhere between the extremes are exhaustive, in-depth descriptions. These can run to many thousands of words and include dozens of photographs. They are ridiculously nit-picking at times. A certain type of reader loves this approach.
Much of what you read today is closer to the once-over-lightly end of the spectrum than the scientific or exhaustive approach.
Need to know
One area that is often not well addressed is focusing on what readers need to know.
The problem is need-to-know differs from one audience to another. Many Geekzone readers want in-depth technical details. If I write about a device they want to know the processor, clock speed, RAM and so on.
When writing for NZ Business I often ignore or downplay technical specifications.
Readers there are more interested to know what something does and if it delivers on promises. Does it work? Does it make life easier? Is it worth the asking price?
Most of the time when I write here, my focus is on how things work in practice and how they compare with similar products. I care about whether they aid productivity more than how they get there. I like the ‘one week with this tablet’ approach.
Beyond benchmarks
Benchmarks were important when applications always ran on PCs, not in the cloud. How software, processor, graphics and storage interacted was an important part of the user experience.
While speeds and processor throughput numbers matter for specialists, most of the time they are irrelevant.
How could you, say, make a meaningful benchmark of a device accessing Xero accounts?
Ten times the processor speed doesn’t make much difference to Xero, or to a writer typing text into Microsoft Word. It is important if you plough through huge volumes of local data.
I still mention device speed if it is noticeable. For most audiences benchmarks are not useful. But this does depend on context.
Context is an important word when it comes to technology product reviews.
Fast enough
Today’s devices are usually fast enough for most apps.
Much heavy-lifting now takes place in the cloud, so line speed is often as big an issue as processor performance. That will differ from user to user and even from time to time. If, say, you run Xero, your experience depends more on the connection speed than on your computer.
Gamers and design professionals may worry about performance, but beyond their needs, there is little value in measuring raw speed these days.
Instead, I prefer exploring if devices are fit for the task. Then I write about how they fit with my work. I call this the anecdotal approach to reviewing. There has been the occasional mistake; my Camputers Lynx review from 40 years ago was a learning experience.
A personal approach like this gives readers a starting point for relating a review to their own needs.
My experience and use patterns almost certainly won’t match yours, but you can often project my experience onto your needs. I’m happy to take questions in comments if people need more information.
Review product ratings
I’ve toyed with giving products ratings in my reviews. It was standard practice to do this in print magazines. We were careful about this at PC Magazine.
A lot of ratings elsewhere were meaningless. There was a heavy skew to the top of the scale.
Depending on the scale used, more products got the top or second top ranking than any other. Few rated lower than two-thirds of the way up the scale.
So much for the Bell Curve.
If a magazine review scale ran from, say, one to five stars, you’d rarely see any product score less than three. And even a score of three would be rare. I’ve known companies to launch legal action against publications awarding three or four stars. Better than average is hardly grounds for offence, let alone litigation.
As for all those five-star reviews. Were reviewers saying a large proportion of products were perfect or near perfect? That’s unlikely. For any rating system to be meaningful you’d expect to see a lot of one or two-star ratings.
That doesn’t happen.
Loss aversion
Once I heard an advertising sales exec (not working on my publication) tell a magazine advertiser: “we only review the good stuff”.
That’s awful.
Readers need to know what to avoid as much as what to buy. Indeed, basic human nature says losses are twice as painful as gains.
Where possible, I like to warn against poor products. Companies that make poor products usually know better than to send them out for review, so you’ll see fewer of them, but it can happen.
My approach to reviewing products isn’t perfect. I’d like to do more scientific testing, but don’t have the time or resources. Often the review loan is only for a few days, so extensive testing isn’t possible. Reviews here are unpaid, which means reviewing has to take second place behind paying jobs.
More on media process:
SEO vs. quality – Why authority matters more than algorithms in the AI age.
This story was first posted in 2011 and needs a refresh, but the key points remain as relevant as ever.
Text editors are a lowest common denominator for dealing with documents. That is their appeal.
Plain text always travels smoothly between applications, operating systems and devices. The same can’t be said for Word documents or anything else that uses a proprietary format.
Text is compact and efficient. It is quicker to search and easier to manage than word processor documents.
Geeks already spend large parts of their working life dealing with plain text. Text is widely used for settings and configuration files. Geeks write small programs to merge, sort and otherwise process text files.
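A typical example of one of those small throwaway programs: merge several plain-text lists, drop duplicates and print the result sorted.

```python
# merge_sort.py: merge plain-text files given on the command line,
# drop duplicate lines and print the result sorted.
import sys
from pathlib import Path

lines = set()
for name in sys.argv[1:]:
    lines.update(Path(name).read_text().splitlines())

for line in sorted(lines):
    print(line)
```

Run it as `python merge_sort.py list1.txt list2.txt > merged.txt`. Because everything is plain text, the output opens in any editor on any system.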
Plain text simpler than word processors
Text editors are simpler than word processors. Many have been around for more than 40 years and have roots in pre-graphical-user-interface computing.
They use keyboard commands — writing memos and other notes this way may look scary to non-technical types, but it isn’t much of a stretch if you’ve used the same tools to handle your everyday technical tasks for a decade or more.
There’s an added bonus to text editing: the applications can bypass the computer mouse. Given mouse movements are one of the most troublesome sources of strain injury, switching to keyboard-oriented writing tools makes sense for technical types who spend hours hunched over their machines.
Ergonomics
Similar ergonomic concerns explain why some professional writers turn their backs on conventional word processors. This group has another problem: modern word processors are busy-looking. It is hard to concentrate on writing when there are so many distractions.
It is tricky, but the old DOS favourite WordPerfect 5.1 could be shoehorned into working with Windows XP. Making it work with Windows Vista is more of a challenge. A small but vibrant user community at WP Universe provides tips and even drivers to make the software work with modern operating systems and hardware.
You’d need to buy WordPerfect. Two recently developed applications channel its spirit for free. Darkroom and Q10 are both stripped-down text editors designed to offer distraction-free writing.
Darkroom fussily requires Microsoft .NET 2.0, a deal-breaker for some, while Q10 mainly gets on with the job, but there is some beta-software strangeness with both programs. Perhaps for now, these text-editor Word replacements are a trend to watch, not follow.
In the meantime, find a basic, old-fashioned text editor. If you can adapt, it could be your biggest productivity boost of the year.
Geoffrey Moore wrote Crossing the Chasm in 1991. The book is still an important sales reference for technology companies.
Moore says you can rank customers on a technology adoption scale. These customers can be companies, organisations or individuals.
There are five ranks. Moore divides the five into two clear groups and the gap between these groups is large. Or in his words, a chasm.
Early adopters
Moore’s first group are early adopters. They feel they must have the latest technology. This can be about prestige or perceived competitive advantage. They are willing to pay a high price to get hold of technology early.
This high price is important. Technology companies get a big margin which funds further development or marketing. The companies love early adopters.
Chasm between visionary and mainstream
The next group are visionary customers. They need a product to gain competitive advantage or control costs. They accept immature support and absorb any technology risk.
They’ll pay a premium, often less than the early adopter premium. This allows companies to develop marketing channels and support infrastructures. These are important in the next phase.
Moore’s third phase is the bulk of the market. Moore calls them early majority or pragmatic customers. They look for clear pay-offs from a technology investment. They deliver the profits and lock a technology into the mainstream.
The fourth group are reluctant adopters. They buy mature, proven technologies if there is a sensible business case. They look for commodity products.
The last group are those who may never adopt a technology. There are companies that still don’t use email, mobile phones or computerised book-keeping.
Crossing the chasm
Moore says for any technology to succeed it must cross the chasm from the first two phases and enter the third. It’s an Evel Knievel leap; many technologies can’t make it.
The bridge across the chasm might be technical. It can be about channel organisation or support infrastructure. There are political matters such as establishing a standard or it might come down to old-fashioned marketing.
To pick winners, focus on the product or technology’s ability to cross the chasm between visionary and pragmatic customers.
Besides Moore’s chasm, there are common sense ideas of price and utility.
A product which meets certain key standards can sell. The number sold depends on price and function. A lower price or more functionality means higher sales.
If the first two phases allow a maker to build in enough functionality or reduce price through economies of scale then it’s easier to cross the chasm.
Standards are successful
Standards are a further good indicator of likely success. Yet you need to read the signs.
Many so-called standards are anything but open. Accepted standards aren’t always the ones which prevail. Think of market dominating companies like Intel or Microsoft.
The standards used in a particular product or technology are not always fixed. For example, developers can change a non-standard communications protocol with a software upgrade.
Work, rest and play
Moore started out looking at business technology. The principles also apply to consumer products such as smartphones. The rules don’t change much between the suits and the open-neck shirts but their interpretation does.
Building up a head of steam to cross the chasm is harder for makers of consumer hardware. Consumers rarely look for a return on their investment in the business sense. They are less willing to pay top dollar for new products.
Complicating matters further is the way many products now straddle both markets. In some areas the consumer market influences business purchasing strategies. For example, the first customers to adopt the iPhone were consumers.
There’s a clear connection between Moore’s chasm and Gartner’s Hype Cycle. While the two look at adoption from different points of view, both recognise there is a hump to get over before a product or technology can succeed.
This post was written in 2011 when Microsoft killed its Reader software. Fifteen years later, the warning about proprietary formats remains more relevant than ever, and Microsoft Reader is still dead.
Microsoft’s decision to kill its Reader eBook software is no surprise.
When it launched in 2000, Microsoft Reader wasn’t bad. Reader used Microsoft’s ClearType font technology to make text more readable on the relatively low-resolution screens common at the time.
Over the years Reader has been neglected. Other eBook formats – often built around hardware – zoomed past Microsoft in terms of technology and popularity.
What happened to my eBook library
I own a small library of eBooks in Microsoft’s .lit format. Or at least I did. Only a handful of titles and only one that I paid money for.
The books in question are stored somewhere in a back-up on one of the half-dozen or so drives sitting in my home office. I haven’t looked at them in years and I haven’t even bothered to install the Microsoft Reader software on my latest Windows 7 desktop and laptop – that decision alone speaks volumes.
I probably won’t need to read those eBooks again. If I wanted to, it would be a struggle.
2026 update: It is now impossible to read those old books using standard personal computer hardware and software.
The problem with proprietary eBook technology
And that’s the hidden flaw behind all proprietary eBook technologies. They are not timeless.
The problem isn’t just data formats. I have documents stored on floppy disks I’ll never access again. A few years ago I threw out 3-inch floppies (a proprietary format from the early 1980s) and the older 5.25-inch disks. At one point I had 8-inch floppies. If those disks contained documents, they are lost forever.
Print books go on effectively for ever. There are many books in my physical library that are older than me. I once read a 400-year-old book. Hell, scholars can read Ancient Greek documents and even older works.
Soon, it’ll be a huge mission to read something published for Microsoft Reader.
Enduring formats
While today’s popular eBook formats may last longer than Microsoft Reader, only a fool would assume they will be around for ever.
In the meantime I plan to find a way of converting .lit files to another format for when I need those books again.
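For anyone in the same position, one route is Calibre’s ebook-convert command line tool. Here’s a minimal sketch, assuming Calibre is installed and on the PATH; it only works on DRM-free .lit files, since books locked to Microsoft’s long-dead activation servers can’t be converted. The folder name is hypothetical.

```python
# Minimal sketch: batch-convert DRM-free .lit files to EPUB with
# Calibre's ebook-convert tool (assumes Calibre is installed and
# on the PATH; DRM-protected files will fail to convert).
import subprocess
from pathlib import Path

def convert_lit_library(folder: str) -> None:
    for lit_file in Path(folder).glob("*.lit"):
        epub_file = lit_file.with_suffix(".epub")
        print(f"Converting {lit_file.name}...")
        subprocess.run(
            ["ebook-convert", str(lit_file), str(epub_file)],
            check=True,
        )

if __name__ == "__main__":
    convert_lit_library("old_ebooks")  # hypothetical folder name
```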
Google has dropped the idea that the end goal of Google Docs is to print words on a sheet of paper.
It’s been a long time coming.
When personal computers were new, word processors were all about print.
But the days when producing printed documents was the main reason to use a computer are long gone. We may not have the promised paperless offices, but there is a lot less paper in the modern workplace.
These days documents usually spend all their time in a pure digital format.
Yet, until now, editing tools have remained geared to print.
Word processors
Take Microsoft Word. You can’t use it for long before seeing a page break. Yes, you can use the web layout view which doesn’t have breaks. But that’s ugly to read as you put down words. And the outline view is for specialist uses.
Likewise Apple’s Pages or the Writer section of LibreOffice. They all assume you want to print documents on paper.
Dive in deeper and you’ll find word processor settings for page headers and footers. Again, these features are print-oriented.
Text editors have a digital-first perspective. But they still nod to printed pages at times.
Google Docs has offered an option not to show pages for years. I first wrote about word processor software still geared to print in 2014.
Google Docs part of Workspace refresh
This week Google announced sweeping changes to Workspace, a set of tools that includes Google Docs.
The big idea behind these changes is that you are no longer working to put words on paper. It’s a symbolic move, a philosophical move and a practical one.
Instead, Google Docs becomes part of a bigger picture: dynamic, interactive documents that integrate with other tools. This includes embedding video, even links to video conference meetings.
The challenge for Google is that many customers liked Google Docs the way it was. They may not print much these days, but the concepts and workflows are familiar. Adapting to a fresh approach means discontinuity.
There’s more coming from Google. More to write about here. Yet for now, Google has untethered its popular word processor from print.
While this was originally written in 2008 and the specific problems mentioned here are history, the main point remains as relevant as ever.
Converting documents from one format to another can be hard.
Sometimes the problem is incompatibilities between different generations of the same application. Microsoft Word 2007’s docx file format isn’t automatically readable in older versions of Word.
The same is true for files generated by Excel 2007 and PowerPoint 2007.
When you know in advance a colleague uses an earlier application version, you can do the polite thing and save your document in the older format. This backward compatibility is built into Word 2007, and most applications offer something similar.
Backward compatibility – up to a point
This is fine in theory, but you’ll either have to remember which format each colleague can use or send everything in the older format. The problem with that approach is that important things in the newer document format may go missing in translation to the older one.
If someone sends you an unopenable docx file and you’re running an older, yet still reasonably up-to-date version of Word, you’ll only be able to work with the file if you’ve downloaded the Microsoft Office Compatibility Pack. The same pack handles Excel and PowerPoint files.
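These days there’s another scriptable route: LibreOffice can down-convert documents from the command line. Here’s a minimal sketch, assuming LibreOffice is installed and its soffice binary is on the PATH; the folder names are hypothetical.

```python
# Minimal sketch: batch-convert .docx files to the older .doc format
# using LibreOffice in headless mode (assumes LibreOffice is installed
# and soffice is on the PATH). This is an alternative to saving each
# file by hand, not the Compatibility Pack approach described above.
import subprocess
from pathlib import Path

def downgrade_docx(folder: str, outdir: str = "converted") -> None:
    Path(outdir).mkdir(exist_ok=True)
    for docx_file in Path(folder).glob("*.docx"):
        subprocess.run(
            ["soffice", "--headless", "--convert-to", "doc",
             "--outdir", outdir, str(docx_file)],
            check=True,
        )

if __name__ == "__main__":
    downgrade_docx("documents")  # hypothetical folder name
```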
Things can be harder when converting files between applications from rival software companies or between applications running on different operating systems.
Not all software companies go out of their way to make conversion simple. Dealing with ancient documents from long-deceased operating systems is almost impossible. I’ve got MS-DOS WordPerfect and PlanPerfect files that I can no longer read.
Text, the lowest common denominator
Some geeks bypass conversion problems by sticking with lowest-common-denominator file formats. Just about every application that deals with text, on any kind of operating system or hardware device from supercomputers to mobile phones and mp3 players, can cope with data stored as plain text (.txt) files.
Text makes sense if you don’t need to keep style formatting information such as fonts, character sizes and bold or italic characters in your documents. An alternative low-end file format allowing some basic style formatting is .rtf, the rich text format. This was originally developed by Microsoft some 20 years ago to allow documents to move between different operating systems and it is still present as an option in just about every application that uses text today.
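To show how little machinery either format needs, here’s a sketch that writes the same sentence as a plain text file and as a tiny hand-built RTF file with one bold phrase. The control words shown (\rtf1, \fonttbl, \b) are standard RTF.

```python
# Minimal sketch: the same sentence saved as plain text (.txt) and as
# a tiny hand-built RTF (.rtf) file with one bold phrase. \rtf1,
# \fonttbl, \b and \b0 are standard RTF control words.
plain = "Simple formats survive because everything can read them."

rtf = (
    r"{\rtf1\ansi\deff0"
    r"{\fonttbl{\f0 Times New Roman;}}"
    r"\f0 Simple formats survive because \b everything\b0  can read them."
    r"}"
)

with open("sample.txt", "w", encoding="ascii") as f:
    f.write(plain)

with open("sample.rtf", "w", encoding="ascii") as f:
    f.write(rtf)
```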
While I can no longer read my ancient Wordperfect files, I have recently found prehistoric documents from the early 1980s when I ran the CP/M operating system and a program called WordStar. Because they were stored as text files, they are still readable.
Years of writing about technology has taught me to be more, not less, cautious about new gadgets or software.
I’m not an early adopter.
Early adopters are people who feel they must own the latest devices. They think they run ahead of the pack. They upgrade devices and software before everyone else.
Early adopters use the latest phones. They buy cars with weird features.
In the past they would queue in the wee small hours for iPhones, iPads or games consoles. There was a time when they’d go to midnight store openings to get the newest version of Microsoft Windows a few hours earlier.
You have to ask yourself why anyone would do that.
The pre-order brigade
Nowadays they are the people who order devices before they are officially available.
In practice their computers often don’t work properly because they are awash with alpha and beta software that screws things up.
And some of their kit is, well, unfinished.
Computer makers depend on early adopters. They use them as guinea pigs.
Early adopter first to benefit, first to pay
Marketing types will tell you early adopters buy a product first to steal a march on the rest of humanity. They claim early adopters will be the first to reap the benefits of a new product: it will make them more productive or their lives more enjoyable.
This can be true. Yet early adopters often face the trauma of getting unfinished, unpolished products to work, sometimes before manufacturer support teams have learnt the wrinkles of their own products.
Some early adopters race to buy a device that turns out to be a dud and is quickly abandoned by the market and soon after by its maker.
For example, in 2015, my other web site looked at how early adopters of Microsoft’s abandoned Windows Phone were left stranded.
Paying a higher price
There’s another reason computer makers love early adopters — they pay more for technology.
New products usually hit the market at a premium price. Once a product matures, the bugs are eliminated and competition appears, profit margins get slimmer.
Companies use high-paying early adopters to fund their product development.
Being an early adopter is fine if you enjoy playing with digital toys, if productivity matters less to you than being cool with a certain crowd, if you have the time and money to waste making things work, and if you can afford to take a risk on a dud product.
I don’t. I prefer to let others try things first. Let computer makers and software developers iron out the wrinkles while the product proves its worth. Then I’ll turn up with my money.
The Usborne guide was my first book and, in sales terms, the most successful, although not the most lucrative. I can’t find any evidence, but I remember it featured on some best-seller lists and total sales ran to hundreds of thousands. If you know, please get in touch.
Usborne translated the book into a number of other languages, including German. The cover of that version is below and, sigh, doesn’t feature my name. There were other language versions, including one in Arabic, and I once spotted one in a shop somewhere in Spain. There were at least three reprints of the English edition.
Oddly the picture shown at Google Books isn’t the cover but the title page from inside the book.
My other books haven’t fared so well. I wrote one about programming the Commodore Plus/4 in 1984 under the pseudonym Gordon Davis, after I saw a player with the same name score a goal for Chelsea one weekend. At the time my contract didn’t allow me to write for any other titles, even though the book had been written before the job started. For some reason Google added the word ‘Bitter’ to the name. I’m not sure what that’s about.
This story was originally posted in September 2017.
At Reseller News, Rob O’Neill writes:
Kiwibank has booked a $90 million impairment in its software assets and flagged a major change in its SAP core banking rollout.
“Although the strategic review has not yet concluded, a potential change to how we build the core ‘back end’ IT system (CoreMod) to match the demands of the ‘future front end’ has prompted a re-assessment of the value of the work in progress since successfully migrating our batch payments to SAP,” the bank said today.
Source: Kiwibank books a $90 million impairment on software – Reseller News
You have to wonder why boards tolerate large-scale SAP projects when the failure rate is so high.
I’ve been told, off-the-record, by a number of high-ranking technology executives that dumb decisions are imposed from the top down with CIOs left to carry the can and pick up the pieces.
One recurring theme is that most of the cost and time overruns are due to extensive integration and customisation.
Make that unnecessary integration and customisation.
It is as if every bank or large business has unique, arcane and esoteric processes that can only be covered by expensive and risky software rewrites.
We know that simply isn’t true.
To think there is something magic tied up in those processes is madness. And expensive.
A smarter strategy for a bank, or any large-scale enterprise, would be to purchase off-the-shelf technology and redesign internal business processes to fit the software. Packaged software usually comes with flexible enough options and settings to cope with essential exceptions.
That’s how it works for small businesses buying accounting software from firms like Xero.
New Zealand interactive game developers earned $203.4 million during the 2019 financial year – double the $99.9m earned only two years earlier in 2017. The success comes from targeting audiences around the world, and 96% of the industry’s earnings came from exports.
Technology lets us export photons in place of atoms. The idea was a common theme in my writing 25 years ago when the internet took off. It took time for the reality of this to creep up on us. Now it is happening in a big way thanks to New Zealand’s game developers.
One hundred years ago farmers would load sheep carcasses onto the then-latest technology: refrigerated ships. These would belch smoke as they steamed to the other side of the world. The trade earned exporters foreign currency and kick-started New Zealand on the path to becoming, fifty years later, one of the world’s richest countries.
Sheep carcasses, milk powder, crayfish, apples and all those other exports were made of atoms. They weighed kilograms and they needed to be physically shifted. The products would often take weeks to reach their destination by ship. There were physical risks.
Game developers sell light particles
Today, when, say, Grinding Gear Games makes a game sale on the other side of the world, photons, tiny particles of light, race to their new home in a fraction of a second.
There’s nothing wrong with physical exports; that’s what we’ve done for as long as anyone can remember. Yet tomorrow’s rivers of gold are going to come from exporting photons. We need to start thinking of games exports in the same way we once thought of meat or dairy exports.
The games industry’s export success reflects a broader pattern: NZ tech companies must think globally from the start, turning our small market size from a limitation into a strategic advantage.
If the games industry grows at the same pace for the next five years, it could be worth a billion dollars a year by 2025. That’s still less than, say, wine or kiwifruit, but with much better margins.
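As a back-of-envelope check on that billion-dollar figure, here’s a short sketch extrapolating the 2017 to 2019 doubling forward; the growth assumption is the post’s, not a forecast.

```python
# Back-of-envelope projection: revenue roughly doubled in the two
# years from $99.9m (FY2017) to $203.4m (FY2019). Extrapolating the
# same doubling-every-two-years pace forward from FY2019:
revenue = 203.4  # NZ$ millions, FY2019
for year in range(2020, 2026):
    revenue *= 2 ** 0.5  # doubling every two years is about 41% a year
    print(f"FY{year}: ${revenue:,.0f}m")

# At that pace revenue passes $1 billion during FY2024 and reaches
# roughly $1.6 billion by FY2025.
```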
The games industry exemplifies the high-value export economy Sir Paul Callaghan envisioned. Rowan Simpson’s analysis of the Callaghan legacy showed New Zealand largely failed the challenge to build innovation-driven prosperity. Yet games developers, earning 96% of revenue from exports with minimal physical infrastructure, demonstrate exactly the “exporting photons not atoms” model Callaghan championed.
Building this billion-dollar future requires a steady pipeline of skilled developers. Computer games technology degrees have long been recognised as serious career moves, offering pathways into one of New Zealand’s fastest-growing export sectors.