This story was first posted in 2011 and needs a refresh, but the key points remain as relevant as ever.
Text editors are a lowest common denominator for dealing with documents. That is their appeal.
Plain text always travels smoothly between applications, operating systems and devices. The same can’t be said for Word documents or anything else that uses a proprietary format.
Text is compact and efficient. It is quicker to search and easier to manage than word processor documents.
Geeks already spend large parts of their working life dealing with plain text. Text is widely used for settings and configuration files. Geeks write small programs to merge, sort and otherwise process text files.
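To show how small these programs can be, here is a minimal sketch in Python (the file names are hypothetical) that merges several text files, sorts the lines and drops duplicates:

```python
# merge_notes.py - merge several plain text files, sort the lines
# and drop duplicates. Standard library only.
import sys
from pathlib import Path

def merge_and_sort(paths):
    """Collect the non-empty lines from every file, sorted, no duplicates."""
    lines = set()
    for path in paths:
        text = Path(path).read_text(encoding="utf-8", errors="replace")
        lines.update(line.rstrip() for line in text.splitlines() if line.strip())
    return sorted(lines)

if __name__ == "__main__":
    # Usage: python merge_notes.py notes1.txt notes2.txt > merged.txt
    for line in merge_and_sort(sys.argv[1:]):
        print(line)
```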
Plain text simpler than word processors
Text editors are simpler than word processors. Many have been around for more than 40 years and have roots in pre-graphical-user-interface computing.
They use keyboard commands — writing memos and other notes this way may look scary to non-technical types, but it isn’t much of a stretch if you’ve used the same tools to handle your everyday technical tasks for a decade or more.
There’s an added bonus to text editing: the applications can bypass the computer mouse. Given mouse movements are one of the most troublesome sources of strain injury, switching to keyboard-oriented writing tools makes sense for technical types who spend hours hunched over their machines.
Ergonomics
Similar ergonomic concerns explain why some professional writers turn their backs on conventional word processors. This group has another problem: modern word processors are busy-looking. It is hard to concentrate on writing when there are so many distractions.
It is tricky, but the old Dos favourite WordPerfect 5.1 could be shoehorned into working with Windows XP. Making it work with Windows Vista is more of a challenge. A small but vibrant user community at WP Universe provides tips and even drivers to make the software work with modern operating systems and hardware.
You’d need to buy WordPerfect. Two recently developed applications channel its spirit for free. Darkroom and Q10 are both stripped down text editors designed to offer distraction-free writing.
Darkroom fussily requires Microsoft .Net 2.0, a deal breaker for some, while Q10 mainly gets on with the job, but there is some beta-software strangeness with both programs. Perhaps for now, these text-editors-as-Word-replacements are a trend to watch, not follow.
In the meantime, find a basic, old-fashioned text editor. If you can adapt, it could be your biggest productivity boost of the year.
Geoffrey Moore wrote Crossing the Chasm in 1991. The book is still an important sales reference for technology companies.
Moore says you can rank customers on a technology adoption scale. These customers can be companies, organisations or individuals.
There are five ranks. Moore divides the five into two clear groups and the gap between these groups is large. Or in his words, a chasm.
Early adopters
Moore’s first group are early adopters. They feel they must have the latest technology. This can be about prestige or perceived competitive advantage. They are willing to pay a high price to get hold of technology early.
This high price is important. Technology companies get a big margin which funds further development or marketing. The companies love early adopters.
Chasm between visionary and mainstream
The next group are visionary customers. They need a product to gain competitive advantage or control costs. They accept immature support and absorb any technology risk.
They’ll pay a premium, often less than the early adopter premium. This allows companies to develop marketing channels and support infrastructures. These are important in the next phase.
Moore’s third phase is the bulk of the market. Moore calls them early majority or pragmatic customers. They look for clear pay-offs from a technology investment. They deliver the profits that lock a technology into the mainstream.
The fourth group are reluctant adopters. They buy mature, proven technologies if there is a sensible business case. They look for commodity products.
The last group are those who may never adopt a technology. There are companies that still don’t use email, mobile phones or computerised book-keeping.
Crossing the chasm
Moore says for any technology to succeed it must cross the chasm from the first two phases and enter the third. It’s an Evel Knievel leap; many technologies can’t make it.
The bridge across the chasm might be technical. It can be about channel organisation or support infrastructure. There are political matters such as establishing a standard or it might come down to old-fashioned marketing.
To pick winners, focus on the product or technology’s ability to cross the chasm between visionary and pragmatic customers.
Besides Moore’s chasm, there are common sense ideas of price and utility.
A product which meets certain key standards can sell. The number sold depends on price and function. A lower price or more functionality means higher sales.
If the first two phases allow a maker to build in enough functionality or reduce price through economies of scale then it’s easier to cross the chasm.
Standards are successful
Standards are a further good indicator of likely success. Yet you need to read the signs.
Many so-called standards are anything but open. Accepted standards aren’t always the ones which prevail. Think of market dominating companies like Intel or Microsoft.
The standards used in a particular product or technology are not always fixed. For example, developers can change a non-standard communications protocol with a software upgrade.
Work, rest and play
Moore started out looking at business technology. The principles also apply to consumer products such as smartphones. The rules don’t change much between the suits and the open-neck shirts but their interpretation does.
Building up a head of steam to cross the chasm is harder for makers of consumer hardware. Consumers rarely look for a return on their investment in the business sense. They are less willing to pay top dollar for new products.
Complicating matters further is the way many products now straddle both markets. In some areas the consumer market influences business purchasing strategies. For example, the first customers to adopt the iPhone were consumers.
There’s a clear connection between Moore’s chasm and Gartner’s Hype Cycle. While the two look at adoption from different points of view, both recognise there is a hump to get over before a product or technology can succeed.
This post was written in 2011 when Microsoft killed its Reader software: 15 years later, the warning about proprietary formats remains more relevant than ever—and Microsoft Reader is still dead.
Microsoft’s decision to kill its Reader eBook software is no surprise.
When it launched in 2000, Microsoft Reader wasn’t bad. Reader used Microsoft’s ClearType font technology to make text more readable on the relatively low-resolution screens common at the time.
Over the years Reader has been neglected. Other eBook formats – often built around hardware – zoomed past Microsoft in terms of technology and popularity.
What happened to my eBook library
I own a small library of eBooks in Microsoft’s .lit format. Or at least I did. Only a handful of titles and only one that I paid money for.
The books in question are stored somewhere in a back-up on one of the half-dozen or so drives sitting in my home office. I haven’t looked at them in years and I haven’t even bothered to install the Microsoft Reader software on my latest Windows 7 desktop and laptop – that decision alone speaks volumes.
I probably won’t need to read those eBooks again. If I wanted to, it would be a struggle.
2026 update: It is now impossible to read those old books using standard personal computer hardware and software.
The problem with proprietary eBook technology
And that’s the hidden flaw behind all proprietary eBook technologies. They are not timeless.
The problem isn’t just data formats. I’ve documents stored on floppy disks I’ll never access again. A few years ago I threw out 3-inch floppies (a proprietary format from the early 1980s) and the older 5.25-inch discs. At one point I had 8-inch floppies. If those discs contained documents, they are lost forever.
Print books go on effectively for ever. There are many books in my physical library that are older than me. I once read a 400-year-old book. Hell, scholars can read Ancient Greek documents and even older works.
Soon, it’ll be a huge mission to read something published for Microsoft Reader.
Enduring formats
While today’s popular eBook formats may last longer than Microsoft Reader, only a fool would assume they will be around for ever.
In the meantime I plan to find a way of converting .lit files to another format for when I need those books again.
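One plausible route, assuming the files are DRM-free and that Calibre’s ebook-convert command-line tool is installed, is a small batch script along these lines:

```python
# convert_lit.py - batch-convert DRM-free .lit ebooks to EPUB with
# Calibre's ebook-convert tool, assumed installed and on the PATH.
# DRM-protected files will simply fail to convert.
import subprocess
from pathlib import Path

for lit_file in Path("ebooks").glob("*.lit"):      # hypothetical folder
    epub_file = lit_file.with_suffix(".epub")
    # ebook-convert infers the target format from the output extension.
    subprocess.run(["ebook-convert", str(lit_file), str(epub_file)], check=True)
    print(f"Converted {lit_file} -> {epub_file}")
```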
While this was originally written in 2008 and the specific problems mentioned here are history, the main point remains as relevant as ever.
Converting documents from one format to another can be hard.
Sometimes the problem is incompatibilities between different generations of the same application. Microsoft Word 2007’s docx file format isn’t automatically readable in older versions of Word.
The same is true for files generated by Excel 2007 and PowerPoint 2007.
When you know in advance a colleague uses an earlier application version, you can choose to do the polite thing and save your document in the older format. This backward compatibility is built into Word 2007. Most applications offer similar backward compatibility.
Backward compatibility – up to a point
This is fine in theory, but you’ll either have to remember which format each colleague can use or you’ll just have to send everything in the older format. The problem with this approach is important things in the newer document format may go missing during translation to the older format.
If someone sends you an unopenable docx file – and you’re running an older, yet still reasonably up-to-date, version of Word – you’ll only be able to work with the file if you’ve downloaded the Microsoft Office Compatibility Pack. This will also work with your Excel and PowerPoint files.
Things can be harder when converting files between applications from rival software companies or between applications running on different operating systems.
Not all software companies go out of their way to make conversion simple. Dealing with ancient documents from long-deceased operating systems is almost impossible. I’ve got MS-Dos WordPerfect and PlanPerfect files that I can no longer read.
Text, the lowest common denominator
Some geeks bypass conversion problems by sticking with lowest-common-denominator file formats. Just about every application that deals with text, on any kind of operating system or hardware device – from supercomputers to mobile phones and mp3 players – can cope with data stored as plain text (.txt) files.
Text makes sense if you don’t need to keep style formatting information such as fonts, character sizes and bold or italic characters in your documents. An alternative low-end file format allowing some basic style formatting is .rtf, the rich text format. This was originally developed by Microsoft some 20 years ago to allow documents to move between different operating systems and it is still present as an option in just about every application that uses text today.
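That durability is easy to demonstrate. A few lines of code can read a decades-old text file whatever machine wrote it; the only wrinkles are line endings and character encoding. Here’s a rough sketch, with a hypothetical file name:

```python
# read_old_text.py - read a text file of unknown vintage, coping with
# old line endings and uncertain character encodings.
from pathlib import Path

def read_plain_text(path):
    """Return the file's contents, trying a few likely encodings."""
    raw = Path(path).read_bytes()
    for encoding in ("utf-8", "cp437", "latin-1"):  # latin-1 never fails
        try:
            text = raw.decode(encoding)
            break
        except UnicodeDecodeError:
            continue
    # Normalise CP/M- and DOS-era line endings to plain \n.
    return text.replace("\r\n", "\n").replace("\r", "\n")

print(read_plain_text("old_notes.txt"))  # hypothetical file name
```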
While I can no longer read my ancient WordPerfect files, I have recently found prehistoric documents from the early 1980s when I ran the CP/M operating system and a program called WordStar. Because they were stored as text files, they are still readable.
Years of writing about technology has taught me to be more, not less, cautious about new gadgets or software.
I’m not an early adopter.
Early adopters are people who feel they must own the latest devices. They think they run ahead of the pack. They upgrade devices and software before everyone else.
Early adopters use the latest phones. They buy cars with weird features.
In the past they would queue in the wee small hours for iPhones, iPads or games consoles. There was a time when they’d go to midnight store openings to get the newest version of Microsoft Windows a few hours earlier.
You have to ask yourself why anyone would do that.
The pre-order brigade
Nowadays they are the people who order devices before they are officially available.
In practice their computers often don’t work because they are awash in beta and alpha versions of software screwing things up.
And some of their kit is, well, unfinished.
Computer makers depend on early adopters. They use them as guinea pigs.
Early adopters first to benefit, first to pay
Marketing types will tell you early adopters will buy a product first to steal a march over the rest of humanity. They claim they will be the first to reap the benefits of the new product. It will make them more productive or live more enjoyable lives.
This can be true. Yet early adopters often face the trauma of getting unfinished, unpolished products to work. Often before manufacturer support teams have learnt the wrinkles of their new products.
Some early adopters race to buy a device that turns out to be a dud and is quickly abandoned by the market and soon after by its maker.
For example, in 2015, my other web site looked at how early adopters of Microsoft’s abandoned Windows Phone were left stranded.
Paying a higher price
There’s another reason computer makers love early adopters — they pay more for technology.
New products usually hit the market with a premium price. Once a product matures, the bugs eliminated and competition appears, profit margins are slimmer.
Companies use high-paying early adopters to fund their product development.
Being an early adopter is fine if you enjoy playing with digital toys and if productivity isn’t as important to you as being cool with a certain crowd. It’s OK if you have the time and money to spend making them work and if you can afford to take a risk on a dud product.
I don’t. I prefer to let others try things first. Let computer makers and software developers iron out the wrinkles while the product proves its worth. Then I’ll turn up with my money.
This story was originally posted in September 2017.
At Reseller News, Rob O’Neill writes:
Kiwibank has booked a $90 million impairment in its software assets and flagged a major change in its SAP core banking rollout.
“Although the strategic review has not yet concluded, a potential change to how we build the core ‘back end’ IT system (CoreMod) to match the demands of the ‘future front end’ has prompted a re-assessment of the value of the work in progress since successfully migrating our batch payments to SAP,” the bank said today.
Source: Kiwibank books a $90 million impairment on software – Reseller News
You have to wonder why boards tolerate large-scale SAP projects when the failure rate is so high.
I’ve been told, off-the-record, by a number of high-ranking technology executives that dumb decisions are imposed from the top down with CIOs left to carry the can and pick up the pieces.
One recurring theme is that most of the cost and time overruns are due to extensive integration and customisation.
Make that unnecessary integration and customisation.
It is as if every bank or large business has unique, arcane and esoteric processes that can only be covered by expensive and risky software rewrites.
We know that simply isn’t true.
To think there is something magic tied up in those processes is madness. And expensive.
A smarter strategy for a bank, or any large-scale enterprise, would be to purchase off-the-shelf technology and redesign internal business processes to fit the software. Packaged software usually comes with flexible enough options and settings to cope with essential exceptions.
That’s how it works for small businesses buying accounting software from firms like Xero.
New Zealand interactive game developers earned $203.4 million dollars during the 2019 financial year – double the $99.9m earned only two years earlier in 2017. The success comes from targeting audiences around the world and 96% of the industry’s earnings came from exports.
Technology lets us export photons in place of atoms. The idea was a common theme in my writing 25 years ago when the internet took off. It took time for the reality of this to creep up on us. Now it is happening in a big way thanks to New Zealand’s game developers.
One hundred years ago farmers would load sheep carcasses onto the then-latest technology: refrigerated ships. These would belch smoke as they steamed to the other side of the world. Exporters earned foreign currency and this kick-started New Zealand on the path to becoming, fifty years later, one of the world’s richest countries.
Sheep carcasses, milk powder, crayfish, apples and all those other exports were made of atoms. They weighed kilograms and they needed to be physically shifted. The products would often take weeks to reach their destination by ship. There were physical risks.
Game developers sell light particles
Today, when, say, Grinding Gear Games makes a game sale on the other side of the world, photons, tiny particles of light, race to their new home in a fraction of a second.
There’s nothing wrong with physical exports; that’s what we’ve done for as long as anyone can remember. Yet tomorrow’s rivers of gold are going to come from exporting photons. We need to start thinking of games exports in the same way we once thought of meat or dairy exports.
The games industry’s export success reflects a broader pattern: NZ tech companies must think globally from the start, turning our small market size from a limitation into a strategic advantage.
If the game industry grows at the same pace for the next five years it could be worth a billion dollars a year by 2025. That’s still less than, say, wine or kiwifruit, but with much better margins.
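The arithmetic behind that projection is simple doubling, as this back-of-an-envelope sketch shows (assuming, of course, the growth rate holds):

```python
# projection.py - rough projection: revenue doubling every two years
# from the 2019 figure of NZ$203.4 million.
revenue = 203.4  # NZ$ million, 2019 financial year
for year in (2021, 2023, 2025):
    revenue *= 2
    print(f"{year}: ${revenue:,.0f} million")
# Prints 2025 at around $1,600 million - comfortably past a billion,
# so "a billion dollars a year by 2025" is, if anything, conservative.
```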
The games industry exemplifies the high-value export economy Sir Paul Callaghan envisioned. Rowan Simpson’s analysis of the Callaghan legacy showed New Zealand largely failed the challenge to build innovation-driven prosperity. Yet games developers—earning 96% of revenue from exports with minimal physical infrastructure—demonstrate exactly the “exporting photons not atoms” model Callaghan championed.
Building this billion-dollar future requires a steady pipeline of skilled developers. Computer games technology degrees have long been recognised as serious career moves, offering pathways into one of New Zealand’s fastest-growing export sectors.
New Zealand’s games industry creates the exports and well-paid jobs that make government eyes light up.
To date the sector has outperformed almost everyone else. Sales double roughly every two years.
Selling photons around the world earns $20 overseas for every dollar made at home.
This export-first approach reflects how NZ startups are born global - forced by our small market to target international audiences from the outset.
Last year the industry earned $323.9 million.
Now all that is at risk.
Rowan Simpson’s recent analysis of Sir Paul Callaghan’s legacy showed New Zealand struggling to build the high-value, innovation-driven economy Callaghan envisioned. The games industry is one of the few sectors that succeeded—exporting digital products, creating well-paid jobs, doubling revenue every two years. Losing this to Australia would be another missed opportunity in a long series.
Australian land grab
Australia plans to hand video games companies a 30 to 40 per cent tax incentive.
That, says the local industry, will trigger a brain drain across the Tasman. Investment will follow in its wake.
You could view it as a land grab.
Chelsea Rapp, who chairs the New Zealand Game Developers Association, says: “Any chance we had of attracting overseas studios to set up shop in New Zealand ends in 2022, and some New Zealand studios are already looking at expanding into Australia instead of expanding locally.”
The Australian government scheme gives game developers a 30 per cent refundable tax offset for production from 2022. On top of the federal money, several Australian states have their own offers which could add a further 10 per cent to the lure.
There’s a suitable vehicle
When stories like this emerge, the local industry body commonly calls on our government to match the Australian incentives.
Yet, there is a New Zealand scheme in place that is similar to the new Australian one.
The New Zealand Screen Production Grant hands out similar sums of money to film and TV companies planning to shoot here. Most of this goes to overseas companies who move here for a while, then pack up and leave at the end.
Games companies are not able to get this grant.
Here for the longer term
The NZGDA points out that games companies are not likely to pull out immediately after completing a new production. Instead they hang around and start again, either on a sequel or a new project.
In other words, pouring money into the games sector keeps jobs and investment ticking over.
There are arguments that governments should not subsidise industries. And there is always a risk of a race to the bottom with Australia.
Almost everyone in business can make an argument why their needs deserve support.
Yet in this case the subsidies, and the risk of a race to the bottom, are already in place – at least for the film sector. It doesn’t make sense to exclude the games market.
What’s more, the games industry often interacts with, and swaps skills and personnel with, other high-tech sectors. Keeping it here in New Zealand will benefit the entire home-grown technology scene.
The industry’s need for skilled developers isn’t new—games technology education has long been recognised as a pathway to well-paid careers, but Australia’s tax incentives threaten to drain that talent pool.
The Lynx was interesting. It had a solid case with a keyboard — a design like the Commodore 64 and Vic-20. In those days most British microcomputers had advanced technology inside but were rubbish on the outside. This was different.
The Lynx had a better specification than its rivals. Camputers offered a higher resolution than competitors and packed the latest ideas in the box. As my review points out, it was well-suited for machine-code programming. Computer buyers thought this was important in the early 1980s.
Camputers Lynx was late to the microcomputer party
As the Register says, the Lynx wasn’t a success. It arrived too late, appearing at the end of the British microcomputer boom. And it was expensive compared with popular models. Camputers failed to attract interest from games developers. That proved fatal.
Camputers included a printer port on the back of the Lynx. I mentioned this in another story I wrote about the machine but failed to mention the printer port didn’t work.
Much to my embarrassment my boss at the time, Jack Schofield, pointed this out to me. My excuse — not a good one — is that Camputers had earlier showed me a demonstration where the machine printed text.
The demo Camputers Lynx unit must have been a non-production computer. I learnt an important lesson: don’t trust product demonstrations, trust only what you test yourself.
Anyone can download free, open source software without paying a fee. It doesn’t break any laws. You have the original developer’s permission to use it.
You can run the software, copy it and pass it on to friends and colleagues.
Free software is only part of the story. It isn’t the most important thing about open source. Yet free software is liberating.
Open source lets you look at code
What matters more is that you can look at the code used to write the software. This means you can see how the developers made the program.
If you have coding skills you can figure out what the developers did. You may be able to understand the assumptions and decisions they made when they wrote the code.
You can tinker with the code and release your own customised version.
Or perhaps you might spot a flaw or an area where the original developers could have done something better. When that happens you can send what you found to the developers and have them fix it, or you can fix it yourself and send them the improved version.
Improving software
This is how software evolves and improves over time. The same process can work with software that isn’t open, but letting everyone interested take a look speeds things up and often means better results.
When you tinker with, improve or fix open source software, you are expected to make your new version as freely available as the original. That way others can follow your work, improve or fix it.
This is a virtuous circle.
Any piece of code can be open source. There are libraries of code snippets you can use to perform simple tasks or include in your own projects.
There are applications and even operating systems. Some of the best known software is based on open source.
Beyond free
While ‘free’ is an important part of the philosophy, there can be open source paid-for software. That is, you can look at the code, but you have to pay to use it. The money is often used to pay for further development.
This approach has many of the same benefits. It means that people and companies can earn a living at the same time.
There are also many commercial and semi-commercial products and services that are built on open source foundations.
The opposite to open source software is often known as proprietary software. You can think of this as closed source. It is where someone, usually a company, owns the intellectual property. In some cases this can include patents.
As a rule you don’t get to see proprietary code and you pay to use the software. Until about 30 years ago all software was proprietary. A lot of enterprise software, and software used by government, still is.
Open source now dominates the software world. Most of the world’s systems run on it. The web is open. Most phones run Android, which is a form of open source.
Windows 11 didn’t get a mention in last week’s look at the HP OmniBook X. That was deliberate. If HP’s otherwise enticing laptop has a weak spot, it is Microsoft’s operating system.
This was the first time I attempted to work using Windows 11. My previous encounters with the operating system were fleeting and shallow. I was sceptical of Windows 11 at launch, and this hands-on experience confirmed my concerns.
My next Windows 11 experience was on the Surface Laptop Studio, and once again, even excellent hardware can’t compensate for the OS’s frustrations.
When Windows switched from 7 to 8, my productivity dropped. Then I took the plunge with a MacBook. It wasn’t my first time with Apple, but that’s another story.
To say my productivity soared is putting it mildly, moving from Windows to Mac was like gaining an extra working day each week. That’s important when work pays by the word or by the hour.
Windows does some things better than MacOS. Upgrades are easier, working with third party hardware is easier. It also has a wider range of games and applications, not that any of that matters to me.
But, hear me out, it feels like Windows 11 treats users with contempt.
Notification hell
After a decade with MacOS I was shocked to see an important-looking notification appear in the bottom left-hand corner of the Windows 11 display that turned out to be an advertisement. Microsoft literally interrupted my flow to direct me to where I could buy a third-party application.
This is not OK. Not in any conceivable world.
Another notification, sorry, “new alert”, flashed up. This might be acceptable if, say, World War III had started and I needed to head to a bunker. The ‘news’ story concerned a ‘celebrity’ I have never heard of doing something I don’t even remotely care about.
At some point (I was busy, so I didn’t take notes) a promotion for a game appeared.
This is not the future we signed up for
How can this even happen with a device that is meant to be a productivity tool?
Sure, all this can be turned off.
Actually I don’t know if it can be turned off. I’m presuming it can, but I couldn’t find where to mute these things without Googling… Except it wasn’t Google. It was Bing and Bing wasn’t forthcoming with the information.
Muting is not the point. These alerts are switched on by default. This is the Windows 11 experience Microsoft wants you to have.
Rightly or wrongly it feels as if Microsoft views Windows 11 users as a market to be milked for extra revenue at every possible opportunity.
Culture shock
This is not an ‘Apple is better than Microsoft’ partisan rant. Well, not entirely. Apple pushes customers towards iCloud, Music and Apple TV among other services, but it doesn’t stop you from working in order to do this.
The point here is that after a decade away from Windows, revisiting the operating system is a culture shock. It wasn’t this way in 2012.
Before I sent the OmniBook X back to HP, I checked to see if it could run Linux as an alternative, non-annoying, operating system. The official answer appears to be “not yet”. The correct answer is “not soon enough”.
This post was written in March 2013 when Google killed Reader. Its warning about relying on free services from big tech companies has been validated repeatedly since then. Google has killed more than 200 products, including Google+, Inbox, Hangouts, Stadia and Podcasts. The lesson remains: sometimes free is too high a price. Updated 2025.
The company doesn’t make any money from its free web-based RSS reader, so its death doesn’t come as a surprise. After all, Google is a business, not a charity.
Google Reader has been the best tool for reading RSS feeds for a long time. It has been so good that it has killed off most of its competition.
Nothing else compares
Twitter, Facebook and other social media tools simply don’t compare for this kind of work. RSS feeds provide comprehensive lists, social media tends to give a fleeting snapshot.
There are other RSS tools, but none of them work as well as Google Reader. It has the best interface for quickly scanning large numbers of posts and it has decent search tools built in.
If Google started charging for Google Reader, I’d happily pay. It would be worth the fee.
There’s a disturbing side to Google’s decision to shut Google Reader. Before Reader there was a healthy set of competing RSS readers. One by one these fell by the wayside because they were unable to compete with the search giant’s free service.
Google entered the space, wiped out the competition and now it is leaving the space.
Jamie Tanna’s post lists many good reasons to have a website. Tanna writes from a software engineer’s point of view. Many of the reasons he offers translate directly to other trades and professions.
Your own place online
A powerful reason is to own your own little patch of the online world, what people used to call cyberspace. As Tanna says, your patch can be many things: a hub where people contact you, an outlet for your writing and other creative work, or a sophisticated curriculum vitae.
Now you may be thinking you can do all these things on Facebook, Twitter, Medium or Linkedin. That’s true up to a point.
Yet you don’t own those spaces. You are part of someone else’s business model. You don’t have control over how they look, you can’t even be sure they will be there in the long term.
After all, there were people who thought the same about Geocities, Google+ or MySpace in the past.
Do it yourself
Creating your own site takes time, effort and maybe a little money. It doesn’t have to take a lot of any of these things.
You’ll need to pay for a domain name… that’s roughly $20 a year. If you are hard-pressed financially there are free options with companies like WordPress. You can get a basic WordPress site up in an hour or so.
You don’t need to be a writer to own your own website. If you post things to Facebook or Twitter, use your site instead (or as well as). It could be a place for photography.
One thing you will find is that a website gives you more of a voice than you’ll get on other people’s sites.
“Some storytellers and influencers are also migrating from personal sites toward individual channels on Medium, Blogger, Twitter, Instagram, and Youtube. But there’s a risk here — those creating and sharing unique content on these channels can lose ownership of that content. And in a world where content is king, brands need to protect their identity.”
As you might expect, Morrison is keen on changing the downward trajectory for domain name registration, but he has a valid point – why would you put the fate of your business in the hands of a platform owned by someone else? Sure, use Facebook etc to engage with your customers, but why not maintain control over your own brand? It baffles me, especially as creating a website is so much easier than it used to be.
At ITP Techblog Sarah Putt sees the issue of using Facebook or another social media site as a matter of branding.
She is right. Branding is important.
Yet the issue doesn’t stop there.
A site of your own
Not owning your own domain name, your own website, means you are not master or mistress of your online destiny. It’s that simple.
If you place your trust in the big tech companies, they can pull the rug at any moment.
This isn’t scaremongering. It has happened time and again. In many cases companies have been left high and dry. Some have gone under as a result.
The big tech companies care no more about the small businesses who piggyback off their services than you care about the individual microscopic bugs living in your gut.
Media companies learned this lesson the hard way. A decade or so ago Facebook and Google made huge efforts to woo media companies. They promised all kinds of deals.
Many of those companies that went in boots and all are now out of business. Gone. Kaput.
Pulling the plug
Google pulled the plug on services like Wave and Google+ almost overnight after persuading media companies to sign up.
Big tech companies change their rules on a whim. Some of those whims meant cutting off the ways media companies could earn revenue.
Few media companies ever made much money from the online giants. Those who managed to survive in a fierce and hostile landscape had nowhere to go when the services eventually closed. Many sank without a trace.
Sure, you may have heard stories about people who have made money from having an online business presence on one of the tech giants’ sites. You may also have heard stories about people winning big lottery prizes. The odds are about the same.
Yes, it can be cheap, even free in some cases, to hang out your shingle on Facebook or Google. But it is never really your shingle. It’s theirs.
The case for your own domain name
On the flip side, starting your own web site is not expensive. You can buy a domain name and have a simple presence for the price of a good lunch.
It doesn’t have to be hard work. You don’t need something fancy. And let’s face it, most company Facebook pages are nothing to write home about either.
Use WordPress. It is not expensive. There’s plenty of help around to get you started. Depending on your needs you can choose between WordPress.com or WordPress.org.
The important thing is the site is entirely your property.
I often hear one argument in favour of working with Facebook. It goes something along the lines of ‘fishing where the fish swim’. It’s true, your customers probably are on Facebook. There’s nothing to stop you from going there to engage with them… just make sure you direct them to your independent web site.
For several years now, the trend among geeks has been to abandon the RSS format. RSS, or Really Simple Syndication, is a way to queue up and serve content from the internet.
Geeks might not like RSS, but it’s an essential tool if you monitor news or need to stay up to date with developments in a subject area.
An RSS feed is a way of listing online material. There’s a feed for this site if you’re interested. It sends out a short headline and an extract for each new post. That way you can stay up to date with everything published here without needing to constantly revisit the site to check for updates.
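If you are curious about what a feed contains, a few lines of code are enough to pull one apart. This sketch uses the third-party Python feedparser library and a placeholder address; substitute any real feed URL:

```python
# read_feed.py - print the newest headlines from an RSS feed.
# Requires the third-party feedparser library: pip install feedparser
import feedparser

FEED_URL = "https://example.com/feed"  # placeholder: use any real feed URL

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", "untitled feed"))
for entry in feed.entries[:10]:        # the ten most recent items
    print(f"- {entry.title}\n  {entry.link}")
```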
Separate feeds
Some big sites break up their news rivers into separate feeds. At the New York Times or The Guardian you can choose to read the technology news feed. At ZDNet you can pick subject feeds or select a feed for an individual journalist.
Sometimes you can also roll your own niche feeds from big sites by using a search term to get a list of all stories including a certain keyword.
The beauty of RSS is that it is comprehensive. It misses nothing. If you go offline for a week you can pick up where you left off and catch up immediately.
RSS is comprehensive
The alternatives are social media sites like Twitter or Facebook. They are nothing like as comprehensive or as easy to manage.
Tweets go flying past in a blur on Twitter.
All the main social media sites manage your feed. They decide what you see. This means you can miss important posts as they get pushed out of sight. That doesn’t happen with RSS.
In his story David Sparks says you need to be on Twitter all the time to catch news. Make that: you need to be on Twitter all the time AND staying more alert than most people can manage.
Universal feed
The other great thing about RSS is the format is so universal. It can be as simple as raw text. You can read it on your phone, tablet, computer or anywhere at any time. You can suck it out and place it on your own web site, for instance.
There are RSS readers built into browsers, mail clients like Outlook and other standard software. Or at least there were. I haven’t checked again lately. Feedly is one of the most popular readers. This is both a website and a series of free apps. You can pay a little extra for features such as the ability to search feeds, tools for integrating feeds into your workflows and so on.
Not long after becoming a technology journalist I met Adam Osborne.
Osborne invented the portable computer. Let’s be honest, his computer was luggable.
We borrowed one for review.
It was obvious a portable computer would change everything. It set us on the path to the iPhone and the Samsung Galaxy phones.
Osborne was a visionary, even if he wasn’t a good businessman — the company went bust after two years.
One thing Osborne said struck a chord at the time: “Adequate is good enough”.
No fannying about
He meant engineers should get a product to the point where it was adequate then send it out the door, no fannying about making it perfect.
It’s a philosophy software companies like Google and Microsoft built fortunes on. Apple, on the other hand, fannies about making everything perfect.
Android works on the adequate is good enough premise. Netbooks were adequate for most users. So was Windows. The fuss over Windows 8 comes down to the simple idea that for many users it isn’t adequate and therefore not good enough.
Good enough
If you’re not a power user, a gamer or an Apple addict you can pick up an adequate and, therefore, good enough, laptop for well under $1000. It’ll do everything you throw at it and then some.
There should be enough change from $1000 for an adequate but good enough phone. It may not have the latest features, but it’ll meet the needs of all but the most demanding users.
None of this is an argument against buying great kit. It’s your money: spend it how you like. But remember most of the time, you don’t have to break the bank to buy tech gear.
This post was originally published in September 2012, so it’s now about events that happened close to 45 years ago. Oddly, I can remember this very well, better than many recent products and launches.
Jupiter Cantab’s Jupiter Ace has just turned 30. It is a curious footnote in the history of personal computing.
I still remember the Ace quite well, mainly because it was a quirky home computer. We called them home computers in the early 1980s, the term personal computers came later.
Go Forth with Jupiter Ace
While every other home computer used Basic, the Jupiter Ace used Forth.
Early home computers didn’t have disks or an operating system in the modern sense – although you could store programs and data on cassette tape. They mainly had a version of the Basic language stored in Rom.
Basic is an interpreted language. Each line of code is processed or interpreted in turn rather than compiled into machine code. This made it slow.
We need to put slow in context here. The Jupiter Ace had an eight-bit processor running at 3.2MHz. That is roughly a thousandth the clock speed of a modern PC.
Forth is still interpreted, but it uses a different structure, so it is many times faster than Basic. It was designed to control radio telescopes, so it was ideal for building computer-controlled projects. I had just built a synthesizer and had plans to use the Ace to build a drum machine.
However, it was harder to learn and much harder to understand. And, as I now know, I’m not geek enough for that kind of thing.
At the time a friend described it to me as a write-only language. So the Ace was essentially a computer for serious programmers. That’s not me. I tried to get my head around Forth, but the Ace was soon in a cupboard somewhere collecting dust.
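For readers who never met Forth: it is a stack language, where words take their arguments from a stack and push results back. This toy sketch, in Python rather than real Forth and purely as an illustration, shows the structure:

```python
# toy_forth.py - a toy illustration of Forth's structure: "words"
# operating on a shared stack, looked up in a dictionary.
def run(program, stack=None):
    stack = [] if stack is None else stack
    words = {
        "+":   lambda s: s.append(s.pop() + s.pop()),
        "*":   lambda s: s.append(s.pop() * s.pop()),
        "dup": lambda s: s.append(s[-1]),          # duplicate top of stack
        ".":   lambda s: print(s.pop()),           # pop and print
    }
    for token in program.split():
        if token in words:
            words[token](stack)
        else:
            stack.append(int(token))               # bare numbers are pushed
    return stack

run("2 3 + dup * .")   # (2 + 3) squared: prints 25
```

Push 2 and 3, add them, duplicate the result, multiply, print. Compact and quick for a machine to process, and you can see how the real thing earned that write-only jibe.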
Thanks to Liam Proven @lproven for spotting my name in The Register story.
“It doesn’t matter what app it is – they all try to get me to turn on notifications, again and again, so that I can come back to their service. Facebook and Instagram are the most aggressive”.
There comes a point where notifications are counter-productive. In my case I first smelled a rat with Linkedin because of the constant barrage of notification mails. The service seemed desperate to get my attention.
Sure, there can be some notifications that should stop you in your tracks. It’s possible to allow family members or important colleagues to cut through. As for the rest… they can go.
I killed my LinkedIn account. Nothing bad happened. In all the years I was a member I got maybe one small freelance writing gig from LinkedIn. Since leaving, my work in-tray is as full as it was and I’ve eliminated a time-sink.
Leaving Facebook is harder. There are people who are important to me who I’m in touch with there. They don’t seem to have any alternative online life. So the account lives, but I’ve turned off all notifications. In fact I’ve turned off almost all notifications from every online service or piece of software.
The only exceptions are where I need to react fast for business reasons. And, anything relating to my immediate family.
Here’s the thing. Nothing bad has happened. If anything I’m more productive.
Notifications are often not about serving our needs, but are about someone else’s business model.
There is also a nuclear option. Choose one day a week to turn everything digital off: have a digital sabbath.
More than 20 years on, Macs and MacBooks are still not typewriters.
Yet Apple’s iPad might be.
My iPad links to an Apple Wireless Keyboard and runs iA Writer. This combination gives me the closest thing I’ve seen in 25 years of computing to an old-school manual typewriter.
For a journalist that’s a good thing.
Typewriter easy
Apple didn’t design the iPad with word processing in mind.
On its own the iPad is a poor writing tool, although the larger on-screen keyboard makes for better typing than a smartphone.
Yet here I am tapping away and loving the experience more than I have done since my last typewriter ribbon dried up back in the 1980s.
Have I taken leave of my senses?
Let me count the ways I love you
Three things make the iPad typewriter-like:
1. Radical simplicity.
The iPad, Apple’s Wireless Keyboard and iA Writer make for simple and distraction free writing.
There’s no mouse. That’s great because lifting hands off the keyboard to point and click is the number one cause of pain for old-school touch typists working on PCs.
Until you stop writing, the keyboard controls everything.
At the same time, the crisp serif text on a plain screen is the nearest thing to type on a sheet of paper. Wonderful.
2. Text editor
iA Writer is a text editor. Not a word processor.
There’s nothing dancing on my screen. No pop-ups, no incoming email. At least not the way I’ve set things up.
It is just me and my words. The only word processor-like feature is the iPad’s built-in spell checker, which mainly stays out of the way.
Best of all, iA Writer doesn’t do page layout. I don’t care how my words look because I can’t tinker. That’s one less thing to worry about.
This all adds up to fast, productive writing.
3. Quick on the draw
Typewriters don’t need to warm up, boot or load applications. Nor does the iPad.
My normal morning practice with a laptop was to make a cup of tea while waiting for the PC to be ready for writing. The iPad is ready in seconds.
I can get my thoughts down while they are still fresh. The first 100 words or so are nailed on the iPad before I’d get started on the PC.
The best computer bits are still there
While my iPad writing combination kills the bad stuff about word processing, it keeps the best feature: The ability to go back over copy and make corrections. This was always a pain when using a typewriter.
And I send my writing to just about anywhere in the world in a matter of seconds. Try doing that with a real typewriter.
Other iPad typewriter plus points
My iPad and keyboard are a lot easier to carry than my ageing and neglected portable typewriter – and easier than my laptop.
The battery life is long. I can work a whole day without needing to find a power point.
iA Writer uses cloud storage. You can choose Dropbox or Apple’s iCloud. This means my work is available to me on any computer anywhere in the world.
The Mac still might not be a typewriter, but the iPad does the job.
I wrote this for the Sydney Morning Herald in 2007. It’s now a piece of history.
If smartphones haven’t killed off traditional handheld computers yet, the day can’t be far away. Sales of non-phone Palm and PocketPC devices are stagnant or falling. There’s been nothing much in the way of new hardware for a couple of years.
Sure, but something huge was on the way.
This is a pity. I’ve found my $500 Palm T|X to be one of my most productive tools. It goes way beyond managing my contact file and calendar information.
My word, what low expectations we had in those days.
The T|X has a 3.8 inch 480 by 320 display. While you wouldn’t call it large, it’s half as big again as the screen on most smartphones.
But tiny by today’s standards.
It makes reading text, browsing web pages, viewing photographs and even watching movies a better experience than squinting at a smartphone display.
Which was true at the time.
The 128MB of built-in memory doesn’t sound much by today’s standards, yet I’ve got a dozen or so applications running on my handheld and scores of stored documents. If I need more memory, I simply slot in an SD card.
That sounds even less now.
And we’re not talking about any old documents. The T|X comes with a bundled version of Documents To Go, an application that allows you to read and, in a limited way, edit, Word or Excel files. It can also be used to read .pdfs, making it the nearest thing to an electronic book.
OK, this looks a bit daft today, but at the time the T|X was a realistic ebook reader.
The T|X’s best feature is its built-in WiFi. When I’m travelling around the city, I stop for coffee where there’s a free hot spot and catch up on emails. Sure you can do this anywhere with a smartphone – but the bigger screen makes a difference.
WiFi is still wonderful.
I use WiFi to sync my Palm with my desktop before leaving home and then reverse the process when I return.
This was a novelty.
The T|X isn’t perfect, text entry is clumsy and the battery won’t make it through an extended working day if the wireless is switched on. Yet, all-in-all, it manages to better the specification of smartphones in most departments. When I’m on business away from home I carry a smartphone and a T|X.
No doubt a phone manufacturer will marry the features of the T|X with a smartphone before much longer – judging by the announced specifications Apple’s forthcoming iPhone could get there first.
Sir Clive Sinclair, the inventor and entrepreneur who was instrumental in bringing home computers to the masses, has died at the age of 81.
His daughter, Belinda, said he died at home in London on Thursday morning after a long illness. Sinclair invented the pocket calculator but was best known for popularising the home computer, bringing it to British high-street stores at relatively affordable prices.
Many modern-day titans of the games industry got their start on one of his ZX models. For a certain generation of gamer, the computer of choice was either the ZX Spectrum 48K or its rival, the Commodore 64.
My first brush with Sinclair was as an A-level student in the UK. Before he made computers, Sinclair designed a low-cost programmable calculator.
It fascinated me and, thanks to a well-paid part-time job, I managed to buy one. From memory it could only handle a few programmable steps, but it was enough to make complex calculations.
My second job after university was working as a reporter for Practical Computing magazine. I started in January 1980 and quickly became familiar with the original Sinclair ZX80 computer.
Later that year I went to the launch of the ZX81 and met Sinclair for the first time. Over the next few years he became a familiar face.
That modest, clunky ZX81 computer changed everything. Before 1981 was out, the publishing company I worked for started Your Computer magazine which focused on small, low-cost home computers. For the first few issues I was staff reporter on both titles.
The next two years were a wild roller coaster ride. An entire industry emerged and I was in the centre of it.
ZX Spectrum was Sinclair’s definitive product
For me, Sinclair’s most important product was the ZX Spectrum. It was flawed in many ways, but it could do enough to spawn a generation of entrepreneurs and get thousands of young people into computing. I still have one in my attic.
By the time the later Sinclair QL appeared, low-cost computers with decent keyboards and storage were pushing out the minimal, low-cost options Sinclair specialised in.
By now Sinclair was Sir Clive. My last brush with his business was the ill-fated C5 battery powered vehicle. It failed and Sinclair faded from sight; later the remnants of his computer business were picked up by Amstrad.
My main memories of Sinclair were his enthusiasm and his ambitions to build devices that anyone, regardless of budget, could afford.
I wrote this post in 2009 when spending one day a week offline was far less challenging than it is today. These days I might only get a day away from all digital screens every month or so.
Here’s the idea:
Set aside one day a week when you don’t switch your computer on.
A day when you don’t check mail, update Facebook or tweet.
No firing up the desktop for game playing.
It doesn’t need to be the same day every week. You may have to trim things according to needs and deadlines. You may only be able to manage one day a fortnight.
Go off-line and let the brain rest. Or, if not rest, allow it to change gear.
Take a break instead of constantly responding to incoming messages. Just let them pile up.
There’s always tomorrow.
You can de-stress. And before you say you find it stressful not being in constant touch with cyberspace, think again. You know that isn’t true.
The online world will go on without you.
Read books, chat to friends, play sport, enjoy the sunshine or bake muffins instead.
That way, when you get back online, you’ll be refreshed. It is like a mini holiday. It may sound like a cliché, but I work better after taking a day-long break from my computer.
Digital sabbath not original
The digital sabbath is not an original idea. If you are religious, the first sabbath came at the end of the first recorded week. The Biblical creation story says God rested on the seventh day.
Ancient Jews worked for six days then strictly observed the Shabbat when many everyday things were not allowed. They knew this was mentally and physically healthy.
I first heard about the idea of a digital sabbath in an online forum – sadly I don’t recall who or where the original idea comes from.
Problems
It is harder to take even one day’s rest from the digital world if you have a smartphone, an ebook reader or if you use the computer as an entertainment hub for music and video. And you may have a job, or some other responsibilities that make going off-line difficult.
Nevertheless, I suggest you do what you can to give it a try, reconnect once a week with the analogue world.
I’m not perfect
I’d like to report I take a full day away from my computer every week. The truth is, I don’t always manage it. Although I try to schedule a full day off each week, I generally only get a couple of full-blown digital sabbaths each month.
Originally published December 2011. Updated January 2026. After 40+ years in technology journalism, this principle remains central to my work.
Why detachment matters in journalism
The percentage may have changed slightly—technology has seeped deeper into everyone’s lives since 2011—but the core principle hasn’t: maintaining distance from geek culture makes for better technology journalism.
This isn’t about lacking technical knowledge. It’s about perspective. Technology journalists serve readers, not industry insiders. The moment you write primarily for other technology enthusiasts rather than the people who actually use technology in their daily lives and work, you’ve failed your audience.
I’m not comfortable when I’m with other technology journalists who want to talk about Star Trek or Dungeons and Dragons.
To say these things don’t interest me is an understatement.
We have science fiction books on our shelves at home. Visitors to our house assume they are mine. They are not. They belong to Mrs B. And apart from her reading tastes, she is even less geeky than me.
Computers do not mean geek
Most of the points I scored on the geek test come from work. After all, I’ve spent years writing about computers and technology, I know the difference between a Rom and a Ram.
Of course, I have more than one dictionary. It’s a journalist thing – they are tools of my trade. And yes, I confess I correct people’s grammar. Editing has been my job for most of my adult life.
In the past, people have commented on my non-geek status making me the wrong person to edit a newspaper’s computer pages, run a computer magazine or write about technology.
Detached
I disagree. A level of detachment means I can make better rational decisions. I’m less tempted to air my prejudices. It means I write for ordinary people, not geeks. In fact one of the skills I’m most proud of is being able to explain tricky things in plain English.
I’m a journalist first, technology specialist second. I can write – and have written – about most subjects.
And anyway, most of my work has been writing for non-geek audiences. My lack of geekiness means I can better serve their needs.
This approach proved especially valuable when covering New Zealand’s technology industry. Local companies need journalists who can explain their innovations to potential customers and investors, not just other technologists. Being able to translate technical developments into business and economic terms serves both the industry and the public better than insider jargon ever could.
The same applies when covering telecommunications regulation, business model challenges in media, or the impact of technology on society. These stories require understanding the technology, but they’re fundamentally about people, economics, and social change.
My journalism training taught me to ask “why should readers care?” before “how does this work?” That order matters. Geeks often reverse it.
Journalism first, technology second
This reader-first approach shaped how I’ve covered journalism itself. When publishers struggled with digital transformation, the story wasn’t about the technology—it was about business models, audience relationships and sustainable journalism.
When paywalls and subscriptions became necessary, the challenge wasn’t technical implementation but convincing readers of the value proposition. When ad-blocking threatened publishers, it was fundamentally about the broken relationship between readers, publishers, and advertisers.
Technology enables or constrains these developments, but it’s never the whole story. That’s why detachment from geek culture remains an asset, not a liability.
If you were wondering why Intel is in so much trouble, this post from 2021 provides the background story: Phone processors improved to the point where they displaced Intel chips in everything else.