It’s peak Covid season, and if high fives, hugs, and close contact with people we know are out of the picture, we sure as hell aren’t interested in physically contacting people we’re less familiar with. Thankfully, this is our first-ever wireless pandemic (phew!), and that means contactless technology is, as the kids may say, like totally in right now.
This piece was written out of curiosity after I made this NYC Subway Apple Pay interaction for a design challenge. I realized how little I knew about how these technologies function, and I had never considered the usability heuristics that make these highly complex features a value add for users. Let’s get into it.
If you’ve never set up Apple Pay (or Google Pay for Android users), it may be time to consider doing so. There’s very little reason not to: it’s more secure than a physical card, cash is gross (94% of dollar bills carry pathogens), and credit card terminals are basically Coronavirus summer camps.
Apple Pay excels in the heuristics of flexibility and efficiency: it lets users make purchases without their wallet in a simple flow that is likely faster than the classic routine of opening your wallet, selecting a card, swiping, entering a PIN, and finally confirming the purchase. And because Apple Pay and Google Pay follow the heuristic of consistency and standards, there’s a very safe and reliable system under the hood.
Apple Pay uses near field communication (NFC) to communicate with credit card terminals, which works much like the RFID tag technology behind many swipe-free building and hotel key cards. The terminal sends a short-range radio signal that includes the transaction information, and once the phone is close enough, it receives the signal and responds with its own transaction data, which comes in two different technical flavors.
This is about to get techie, so slide those glasses up, or skip ahead if you’re not as nerdy as me 🤓.
EMV stands for Europay, Mastercard, and Visa, and EMVCo is the consortium that maintains it: effectively a standardized encryption network and protocol that lets every credit card company wirelessly communicate with terminals in the same way. To make a long story short, when you set up a credit card in Apple Wallet, Apple sends your account information to the payment network, which creates an encrypted token cryptographically associated with your account. That token is sent back to live in a dedicated chip in your phone called the Secure Element (SE).
It is practically impossible to take this encrypted token and work backward to a person’s credit card number, which is one reason it is far safer to carry your credit card in a phone wallet. The phone sends the terminal a one-time-use encrypted signal for the exact value of the transaction, and the terminal relays that info to the payment network, which decrypts the token, checks with the bank that the funds are good, and responds to the terminal when the purchase is cleared. All of this happens in a matter of seconds 🤯.
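The flow above can be sketched in a few lines of Python. This is a toy simulation only: the class names, the hash-based key derivation, and the HMAC message format are invented for illustration and are not Apple’s or EMVCo’s actual design. The point it demonstrates is that the phone only ever transmits an opaque token plus a per-transaction cryptogram, and only the payment network can link the token back to the real account.

```python
import hashlib
import hmac

# Illustrative sketch only -- names, key derivation, and message format
# are invented for this example, not Apple's or EMVCo's actual design.

class PaymentNetwork:
    """Stands in for the token service that maps tokens back to accounts."""

    def __init__(self):
        self._vault = {}  # token -> (account number, shared secret key)

    def provision(self, account_number):
        # A real token service issues a random token and key; hashes are
        # used here just to show the token reveals nothing about the account.
        token = hashlib.sha256(b"token:" + account_number.encode()).hexdigest()[:16]
        key = hashlib.sha256(b"key:" + account_number.encode()).digest()
        self._vault[token] = (account_number, key)
        return token, key

    def authorize(self, token, amount, cryptogram):
        # Only the network can resolve the token to the real account.
        account_number, key = self._vault[token]
        expected = hmac.new(key, amount.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, cryptogram)


class SecureElement:
    """Stands in for the phone's Secure Element chip holding the token."""

    def __init__(self, token, key):
        self._token, self._key = token, key

    def tap_to_pay(self, amount):
        # Sign the exact transaction amount; the card number never leaves home.
        cryptogram = hmac.new(self._key, amount.encode(), hashlib.sha256).hexdigest()
        return self._token, amount, cryptogram


network = PaymentNetwork()
phone = SecureElement(*network.provision("4111111111111111"))

token, amount, cryptogram = phone.tap_to_pay("4.99")
print(network.authorize(token, amount, cryptogram))  # → True
```

Notice that the terminal only ever sees `token` and `cryptogram`; even if both were stolen, neither reveals the underlying card number.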
If the terminal is old and isn’t equipped with EMV contactless technology, Apple Pay falls back to the next method: magnetic stripe data emulation. This data package is equivalent to the readout a terminal would get from swiping the card’s magnetic stripe, except that instead of the static CVV printed on the back of the card, it uses a dynamic CVV that changes for every transaction.
This means the same transaction signal sent from an iPhone to a card terminal cannot be replayed. While a business can charge a physical credit card multiple times, that is impossible with Apple or Google Pay.
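Here is a toy model of that one-time-use property. The scheme below (a code derived from a secret plus a transaction counter) is invented for illustration and is not the real EMV dynamic CVV algorithm, but it shows the behavior: every tap produces a fresh code, and a replayed code is declined.

```python
import hashlib

# Toy model of a dynamic CVV -- an invented scheme for illustration,
# not the real EMV algorithm. The code changes with every transaction
# counter value, so a captured code cannot be replayed.

SECRET = b"card-secret"

def dynamic_cvv(counter):
    digest = hashlib.sha256(SECRET + str(counter).encode()).hexdigest()
    return str(int(digest, 16) % 1000).zfill(3)  # a 3-digit code

class Terminal:
    def __init__(self):
        self._last_counter = 0

    def accept(self, counter, cvv):
        # Reject any counter we've already seen, and any code that
        # doesn't match the expected value for this counter.
        if counter <= self._last_counter or cvv != dynamic_cvv(counter):
            return "declined"
        self._last_counter = counter
        return "approved"

terminal = Terminal()
print(terminal.accept(1, dynamic_cvv(1)))  # fresh code: approved
print(terminal.accept(1, dynamic_cvv(1)))  # same code replayed: declined
print(terminal.accept(2, dynamic_cvv(2)))  # next transaction: approved
```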
Yay! Contact-free is not only safer from germs, it’s also much safer from fraud and account theft!
The next contactless tech feature I’m discussing is AirDrop, or Nearby Share for Android users. AirDrop is a huge usability hack for people who have multiple devices or ever have to transfer files to people nearby. If you find yourself emailing yourself attachments or digging for flash drives like it’s 2005, you may want to consider using AirDrop or Nearby Share instead.
AirDrop works by using Bluetooth Low Energy (BLE) to broadcast, discover, and negotiate an encrypted peer-to-peer Wi-Fi network between the two devices. Because the actual file transfer runs over Wi-Fi rather than Bluetooth or NFC, which have considerably lower bandwidth, AirDrop transfers are fast and secure. And I now imagine this Avatar visual every time I AirDrop a photo from my phone to my laptop.
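Some back-of-the-envelope arithmetic shows why the Wi-Fi handoff matters. The throughput numbers below are rough, real-world ballpark figures (not measured values), but the orders of magnitude make the design choice obvious:

```python
# Why AirDrop uses Bluetooth LE only for discovery and hands the actual
# transfer to peer-to-peer Wi-Fi. Throughputs are ballpark figures.

FILE_MB = 250  # a few minutes of phone video

throughput_mb_per_s = {
    "Bluetooth Low Energy": 0.125,  # ~1 Mbit/s
    "Classic Bluetooth":    0.25,   # ~2 Mbit/s
    "Peer-to-peer Wi-Fi":   40.0,   # ~320 Mbit/s
}

for link, rate in throughput_mb_per_s.items():
    print(f"{link}: {FILE_MB / rate:,.0f} s")
# Bluetooth Low Energy: 2,000 s
# Classic Bluetooth: 1,000 s
# Peer-to-peer Wi-Fi: 6 s
```

A transfer that would take over half an hour on Bluetooth takes seconds over the negotiated Wi-Fi link, while BLE’s tiny advertising packets remain perfect for the low-power discovery step.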
AirDrop gives users the heuristic of user control and freedom in how they send files between devices. Dropping a sensitive file via AirDrop is actually more secure than sending an email, because the file never lives on a server that could be vulnerable to hackers.
In 2020, Google introduced Nearby Share for Android, which sends files over Bluetooth, WebRTC, or peer-to-peer Wi-Fi, depending on which is faster. The feature came to Android a full nine years after the introduction of AirDrop, and that lag is largely because Android devices are built by many hardware manufacturers, who have yet to all adopt the same sharing protocol and hardware.
Voice control is an emerging trend in usability. There are many advantages to replacing a graphical user interface (GUI) with a voice interface, but they come with challenges that designers have yet to find standardized solutions for.
Voice as an input modality is very efficient, especially for users who are multitasking. One of the biggest usability problems for voice control is the pair of gulfs of execution and evaluation. The gulf of execution refers to the fact that there is little if any affordance for voice input options. Since machine speech comprehension hasn’t been around for very long, we haven’t developed the vocal equivalent of standards like ctrl+F and ctrl+C. Most people are still not exactly sure what their voice devices are capable of doing, nor do we know the exact phrasing that would let us use those features. The gulf of evaluation is similar, but refers to the fact that the results of voice input are largely unpredictable for most people. If I ask Siri to let me know when it’s my fiancé’s birthday, will Siri create a reminder, a calendar event, or neither? These input problems need to be addressed and standardized as voice input matures.
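The birthday example can be made concrete with a toy intent matcher. The intent names and keyword lists below are made up for illustration (real assistants use far more sophisticated models), but the failure mode is the same: one utterance plausibly matches more than one action, and nothing in the interface tells the user which one the assistant will pick.

```python
# A toy keyword-based intent matcher illustrating the gulfs of execution
# and evaluation. Intent names and cue lists are invented for this example.

INTENTS = {
    "create_reminder":       ["remind", "let me know", "don't forget"],
    "create_calendar_event": ["birthday", "schedule", "calendar"],
    "send_message":          ["tell", "text", "message"],
}

def match_intents(utterance):
    """Return every intent whose cue words appear in the utterance."""
    text = utterance.lower()
    return [intent for intent, cues in INTENTS.items()
            if any(cue in text for cue in cues)]

print(match_intents("let me know when it's my fiancé's birthday"))
# → ['create_reminder', 'create_calendar_event']  -- ambiguous outcome
```

Two intents match, and the assistant must silently pick one; the user can’t predict the result (gulf of evaluation) and has no visible menu of supported phrasings to choose from (gulf of execution).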
Voice as an output modality has a lot of weaknesses, but the main one is that it’s difficult to scan information via voice output. Voice is typically a sequential output paradigm, but sighted people do not browse information sequentially; they scan. The same is often true for blind users. My vision-impaired friend Byron at I See Music Studios can fly through voice menus because he’s familiar with his tools to the point of memorizing each option: he only has to hear the first syllable of an interface label before he clicks the arrow to the next item. Since most people are not like Byron and would not memorize verbal interfaces, voice will likely continue to serve predominantly as an additional input and output layer on top of traditional screen interfaces.
Most people have experience with Amazon Echo, Google Home, or Siri on HomePod, but we’re seeing a trend of third-party manufacturers building voice control into devices like lighting controllers and smart thermostats. Voice-controlled devices that use the Alexa Voice Service (AVS) and Amazon Web Services IoT can typically ship at a lower price, because all of the language processing happens in the cloud on a virtual voice device, which reduces the processing hardware required in the small controllers themselves. Now we’re able to speak (or yell in frustration) at more of our electronics than ever before.
There’s never been a better time to avoid contacting other people’s devices. Thanks to the positive usability heuristics of many wireless communication features, we have an incredible tool set of socially distanced tech at our disposal. The next time you touch a surface you could have avoided with one of the usability hacks above, consider that contact-free features are not only safer, but often much easier and more useful than their predecessor technologies.
Bohn, Dieter. “Android’s ‘Nearby Share’ File Sharing Feature Is Finally Launching.” The Verge, 4 Aug. 2020, www.theverge.com/2020/8/4/21353020/android-nearby-share-file-sharing-feature-launch-airdrop.
freeCodeCamp. “How Apple Pay Works Under the Hood.” freeCodeCamp.org, 7 May 2018, www.freecodecamp.org/news/how-apple-pay-works-under-the-hood-8c3978238324/.
Friedman, Alan. “Inferior Antenna System on the iPhone Lands Apple in Court.” PhoneArena, 7 Apr. 2020, www.phonearena.com/news/use-of-inferior-antenna-system-leads-to-a-class-action-suit-against-apple_id123637.
Ray, Brian. “Bluetooth vs. Bluetooth Low Energy (BLE): What’s the Difference?” Link Labs, www.link-labs.com/blog/bluetooth-vs-bluetooth-low-energy.
“RFID Basics.” SparkFun, learn.sparkfun.com/tutorials/rfid-basics/all.