Wednesday, March 11, 2015

Apple Pay Insecure? The Deeper Truth

There are now a lot of "news" articles (such as this one in the LA Times) running sensational headlines claiming that Apple Pay is not secure.  Recently it came out that fraud rates among Apple Pay customers were "through the roof" - 6% as opposed to 1%.  At face value, this would appear to mean that Apple Pay is a much less secure form of payment than normal credit cards or other methods.  However, let's cut through the hype and the fervent Apple bashing, and get to the facts.

First, in September 2014 when Apple Pay was announced, Tim Cook claimed that it would be more secure.  So what exactly is the discrepancy?

The Old Way

To understand, we first need to understand the different payment systems, their vulnerabilities, and what is being done to combat the fraud.  Current credit cards employ one of two technologies: a magnetic stripe containing the card information, or an EMV (Europay / MasterCard / Visa) chip.  With a magnetic stripe, if you want to pay for goods or services with your card, you have a choice of either swiping it on a reader, or reading the card information aloud (or from memory) to someone and having them enter it into a payment system.  A paper imprint with carbon paper is a tried-and-true 50-year-old method of taking payment information as well.  With EMV, when the card is scanned, the chip generates a per-transaction code that can only be used to issue payment to that merchant for that transaction.  Your personal information (card number, name, expiration date, security code, etc.) is not communicated to the merchant.  However if, say, you are paying over the phone or Internet (not in person), then you have to resort to Plan B just like with the magnetic stripe.
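
To make the contrast concrete, here is a toy sketch in Python of the per-transaction-code idea.  It is not the real EMV algorithm - the key, the field names, and the code length are all invented for illustration - but it shows why a code intercepted at one merchant is worthless anywhere else, while magnetic-stripe data is reusable everywhere.

```python
import hashlib
import hmac

# Toy illustration only -- not the actual EMV algorithm.  The chip holds a
# secret key that never leaves the card; only the resulting code travels.
CARD_SECRET = b"secret-key-burned-into-the-chip"   # hypothetical key

def transaction_code(merchant_id: str, amount_cents: int, counter: int) -> str:
    """Produce a one-time code tied to this merchant, amount, and transaction counter."""
    message = f"{merchant_id}|{amount_cents}|{counter}".encode()
    return hmac.new(CARD_SECRET, message, hashlib.sha256).hexdigest()[:16]

def issuer_verifies(code: str, merchant_id: str, amount_cents: int, counter: int) -> bool:
    """The issuing bank, holding the same key, recomputes the code to check it."""
    expected = transaction_code(merchant_id, amount_cents, counter)
    return hmac.compare_digest(code, expected)

code = transaction_code("MERCHANT-42", 1999, counter=7)
print(issuer_verifies(code, "MERCHANT-42", 1999, 7))   # True: valid for this transaction
print(issuer_verifies(code, "OTHER-STORE", 5000, 8))   # False: useless anywhere else
```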

So, what are the vulnerabilities in this old system?

  1. Point of Purchase Interception
    1. If you have a dishonest clerk, or a fake card scanner attached on top of the real one, they can grab your card info and record it for later fraud.
    2. Many card transactions are sent over unencrypted, open phone lines that can be tapped or intercepted.
    3. Many are processed by computer over the Internet, where they can also be intercepted.
  2. Information System Hacking
    1. The vast majority of large businesses, whether the purchase is online or in person, enter and store your card information in their databases.  This means that a hacker who breaks into the system can harvest thousands or millions of contact and payment records from those databases over the Internet.  Many of these systems do not have intrusion detection, so it is entirely possible that a large share of these thefts go undetected or unnoticed.
  3. Card Issuing Bank
    1. If a thief steals not your card information but your personal identification information (name, Social Security number, address, bank account numbers, employer, utility account numbers, etc.), they can pose as you and open a new card in your name that you don't know about, with the statements sent to your address, and run up charges that you are billed for.

Now that we know this, what has been done to combat the fraud?  Every card issuer is required to implement fraud monitoring that learns your purchase patterns and attempts to detect purchases made out of pattern - alerting you and possibly disabling the card.  Many of us have experienced going on vacation and having our card stop working because the fraud monitoring system detects out-of-normal behavior.  However, thieves can also gain information on your spending habits (in what areas your purchases are made) and sell that information along with the stolen card numbers, so the buyers can use those cards in those same areas and avoid fraud monitoring for a period of time.  But these efforts have held the fraud rate at about 1% (about $1 for every $100 spent).  That is actually pretty high if you think about it - especially in the United States, where merchants are not (yet) required to provide EMV card readers.
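
For the curious, here is a minimal sketch of the "learn the pattern, flag the outlier" idea.  It is nothing like a real issuer's fraud model - the purchase history, the regions, and the three-standard-deviation threshold are all invented for illustration - but it shows how a wildly out-of-pattern purchase gets flagged while a normal one sails through.

```python
from statistics import mean, stdev

# Minimal, invented sketch of pattern-based fraud monitoring.
history = [  # (amount in dollars, region) from the cardholder's past purchases
    (42.10, "Seattle"), (12.99, "Seattle"), (87.50, "Seattle"),
    (23.40, "Seattle"), (55.00, "Portland"),
]

amounts = [amount for amount, _ in history]
usual_regions = {region for _, region in history}
average, spread = mean(amounts), stdev(amounts)

def looks_out_of_pattern(amount: float, region: str) -> bool:
    """Flag a purchase far from the usual amounts, or made in an unfamiliar region."""
    unusual_amount = abs(amount - average) > 3 * spread
    unusual_region = region not in usual_regions
    return unusual_amount or unusual_region

print(looks_out_of_pattern(60.00, "Seattle"))   # False: in pattern, approved quietly
print(looks_out_of_pattern(2400.00, "Miami"))   # True: flagged, the cardholder gets a call
```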

The Apple Pay Way

So now, how does Apple Pay work?  With Apple Pay, the device (iPhone, Apple Watch, iPad, etc.) takes on the role of the EMV chip, generating a per-transaction code.  And this is used both for online purchases (in-app) and in-person purchases (via Near Field Communication, or NFC).  Plus, in order for you to use your device to pay, you must scan your fingerprint.  So, for the payment itself, it is definitely more secure.
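
To see what that means in practice, here is a toy sketch, in the same spirit as the EMV sketch above, of what the merchant actually receives.  The names (device account number, device key) and the flow are simplified assumptions for illustration, not Apple's or the card networks' actual protocol, but the point holds: the real card number never leaves the issuing bank, and nothing is released without the fingerprint check.

```python
import hashlib
import hmac
import secrets

# Toy sketch of tokenized payment -- invented names and flow, for illustration only.
REAL_CARD_NUMBER = "4111111111111111"        # stays with the issuing bank, never on the phone
DEVICE_ACCOUNT_NUMBER = "4999000011112222"   # stand-in token stored on the device
DEVICE_KEY = secrets.token_bytes(32)         # per-device secret held in the secure element

def pay(merchant_id: str, amount_cents: int, fingerprint_ok: bool) -> dict:
    """What the merchant terminal receives from the phone over NFC (or in-app)."""
    if not fingerprint_ok:
        raise PermissionError("fingerprint required before any payment is released")
    message = f"{DEVICE_ACCOUNT_NUMBER}|{merchant_id}|{amount_cents}".encode()
    cryptogram = hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()[:16]
    # Note what is absent: the real card number, name, expiration date, security code.
    return {"token": DEVICE_ACCOUNT_NUMBER, "cryptogram": cryptogram, "amount": amount_cents}

print(pay("MERCHANT-42", 1999, fingerprint_ok=True))
```

So why is there so much fraud with Apple Pay?  Now the vulnerability comes to light: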

  1. Card Issuing Bank / Apple Pay Trust
    1. In order to register a card with Apple Pay, Apple came up with a system whereby the cardholder takes a photo of the card with the phone, then goes into the bank to have them confirm, in person, that the cardholder really is the cardholder (physical presence in front of a physical bank employee).  Alternatively, a cardholder's previously-configured trusted device (iPhone) can be registered via Apple, with a secure iMessage sent to that device to confirm identity.  However, most banks balked at the "overhead" this would impose on their customers, and opted for a less secure authentication over the phone: a bank operator talks to the "cardholder" (hopefully it really is that person) and asks some "personal" questions to confirm he or she is the cardholder, then allows that card to be registered with the device for Apple Pay.

As you can see, it is impossible to intercept any useful information at the point of transaction.  Nor is any card or cardholder information made available to the merchant, so there is nothing sensitive to store in a database that can be hacked.  However, you can probably spend about two seconds and think of a way around this - and the vulnerability has nothing to do with Apple.  The thieves have spent far longer thinking about it, so they saw this hole as well.  Through the readily available methods of obtaining personal information, a thief who has your card information can register your card with his own Apple Pay through the bank, yes, but that is the hard way and may well trace back to him through the device.  Easier still is to use the stolen identity information to open a credit card through an issuing bank - one the person whose identity was stolen knows nothing about - and then register that card with Apple Pay.  And that is exactly what is happening.
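
To put the hole in one place, here is a sketch of the enrollment decision being described.  The method names and rules are purely illustrative - no bank publishes its provisioning logic - but they capture why the phone-quiz path is the weak link: a thief armed with stolen identity data can often answer the "personal" questions.

```python
# Illustrative only -- invented method names and rules, not any bank's actual process.
def approve_enrollment(method: str, answers_match_records: bool) -> bool:
    """Decide whether a card may be registered to a device for mobile payment."""
    if method == "in_branch_id_check":
        return True    # strong: a human checks photo ID in person
    if method == "trusted_device_confirmation":
        return True    # strong: confirmation sent to an already-verified device
    if method == "phone_quiz":
        # weak: "personal" questions a thief can often answer from stolen
        # or publicly available information
        return answers_match_records
    return False

# A thief with stolen identity data sails through the weak path:
print(approve_enrollment("phone_quiz", answers_match_records=True))   # True
```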

What are the methods of mitigating this?  If you have banks with lax procedures or standards for authenticating their customers (and many have them), then there is very little that can be done.  Certainly you can be alerted if you monitor your credit report at all three bureaus, and perhaps pay for a monitoring service like LifeLock, but it actually takes some time before a new account makes its way through the system and becomes an alert.  And you have to pay attention to the alerts and determine whether each one is because you actually did open an account (a false alarm) or because someone else did it in your name (a true alert).  This alert monitoring is not as immediate or mature as the fraud monitoring in place for the old card methods, where as soon as a purchase is made, the computer can flag it and call you or disable the card.

So, let's compare apples to apples (yes, pun intended).  It is actually misleading to cite that 1% fraud rate, because it is a percentage of a different kind of theft (card theft, not identity theft).  By that same measure, Apple Pay would actually be at 0% - no card information is stolen through Apple Pay transactions.  So what is the identity theft rate without Apple Pay - the rate of thieves opening accounts under a stolen identity?  That figure I haven't seen.  And that is the rate you must compare to the 6% being claimed for Apple Pay.

Tuesday, March 10, 2015

Network Overload

The sheer plethora of different flavors of social networking has my head spinning.  I mean, you've got Instagram.  What's that?  You take a pic, share it, comment on it.  OK, so how's that different from Facebook or Twitter?  And it seems all the other things that have popped up are some quirky flavor of, basically, texting.

So, I ask you - the general readership.  What do we need?  What do I use?  Facebook, Twitter (kind of), LinkedIn, and Google Plus (kind of).
  • Facebook - why do I use it?  Because that's where everyone is.  Think Myspace, but nowadays.  (Or maybe Myspace is a dated reference - who uses that anymore?)  I would be perfectly happy to drop it completely, but most everyone has an FB account.
  • Twitter - I don't use it, but I do "cc" tweets via a multi-network publishing service.  Why?  Because I'm a tech geek, so I must be on Twitter.  I do get Twitter's advantages, its value proposition if you will.  But it's not for me.
  • LinkedIn - it's a business Facebook.  That works.  Everyone is on it, so I'm on it.
  • Google Plus - Like Twitter, I really don't use it.  But I do.  It's the platform I'd like to use instead of Facebook, but let's face it, not everyone's on it.  But it's too important to ignore.
  • Microsoft Skype - I use Skype for voice/video, and it has a chat feature that is convenient at times.
  • Microsoft Lync / Messenger / Communicator (whatever you want to call it) - we use it at work.  That's it.
  • iMessage - I use it heavily, since a LOT of people have Apple products.
  • SMS - I use it for mobile texting (and from my PC).
Then, we have sooooo many others that seem to be popular and seem to be doing well (as businesses, as startups), but where I totally fail to see what value they bring, or any hope of lasting beyond the whim of the moment.  Tell me I'm wrong, but I just don't get it.

  • Snapchat - Instagram that disappears seconds after you see it.  So?
  • FireChat - some loopy chat program that daisy-chains nearby devices together, so you don't need cellular service or Internet access - just WiFi or Bluetooth - to be able to "tweet" anywhere in the world.
  • Salesforce - yeah, there's a chat in Salesforce.  Why??  Seriously, like we don't have anything else?
  • Yammer - some LinkedIn wannabe.

So, there is a lot of yammer out there - a lot of noise.  But what value is it bringing to our lives?  Twitter, it seems, is the way to disseminate news - especially from repressive societies.  Facebook is the premier social network, with Google Plus a runner-up.  And LinkedIn is the premier business network.  Other than that, what else should there be?

Now I don't want to suggest that we stifle innovation, or that we close off any other possibilities.  However, how much can we really take?  What fulfills an actual need?  Do we really need "After School" to be abused and contribute to anonymous threats, intimidation, and bullying?  I leave it to you, the using public, to decide.  How many networks do we need to be in?  Should we be in?  To me, it seems totally crazy to get embroiled in so many that I spend my real life digitally interacting instead of dealing with the people important in my life, who are sitting right next to me at the table or on the couch.  And I find it disgusting and reprehensible to have people over for a holiday celebration, and have them be "networking" on their mobile devices.  I'm thinking we should check the mobile devices in at the door ("the Sheriff don't allow no weapons on this here premises, leave your guns at the door!").

Thoughts?  Criticism?  Evangelizing for a little-known networking platform?  Let me know.

Wednesday, March 4, 2015

Google is the new IBM


A Tangent

This is the 128th post on my blog, and as anyone in computer science knows, that is a significant number.  128 is 2 to the 7th power - exactly half of 256, the number of distinct values a byte can hold.

The History

It has happened again - the world has come full circle.  The world of computing, that is.  In the 1980s, as we moved from the IBM PC and Apple II into a full-blown explosion of ubiquitous computing, IBM and Microsoft developed what became the baseline for personal computers.  We used to remark that "everyone and their brother" had a brand of computer, and there were a plethora of components so you could build your own.  Every DOS- and Windows-compatible computer was called an "IBM Compatible," but just as often it was also called an "IBM PC" - even the ones made by others (Brother, Epson, Gateway, and a thousand other names).

How did this come about?  Microsoft was just getting started when Bill Gates bought the rights to DOS.  He then held an exclusive copyright (vigorously defended) and sold usage rights to hardware manufacturers like Tandy, IBM, Compaq, and others.  Since IBM was the biggest company in the field at the time (and a pioneer and innovator in the hardware), it became the best-known brand name.  Note that Apple devised one of the first "all-in-one" personal computers - screen, keyboard, and CPU together instead of separate kit components.  That packaging, championed by Steve Jobs, revolutionized personal computing, but IBM perhaps more than anyone capitalized on it.

(Side Note) Ironically enough, the reason IBM developed the Personal Computer at the time was that they felt they needed a product that would prove, by how limited it was, that the only way to do real computing was to get a real computer - a mainframe.  The PC would be a showcase of failure, and when it did fail, it would morph into a terminal for the mainframe (and the AS/400 mini).

As prices came down due to technology and manufacturing improvements, and to competition among a myriad of manufacturers, the public bought into small-system computing in a big way.  IBM was a household name even before this revolution (think typewriters - before our first computer, we had an IBM Selectric that I spent hours hammering out my various writings on), so they cemented their name during this phase.  Everybody and their brother was licensing technology not just from Microsoft, but also from IBM, which patented many inventions and advancements in personal computing.

Along came Google - a company named after a misspelling of "googol," the enormous number coined by the young nephew of mathematician Edward Kasner.  The search engine company that was really an advertising company.  We all know their story; they got huge, fast.  They branched into web browsing, and then the mobile smart-device market opened up.  Apple revolutionized yet another market, and Google decided to venture into it with a mobile OS - Android.

Now the various aspects of Google come together - web browser, search engine, advertising, and a mobile OS.  This is a massive synergistic environment designed to make money by learning your personal habits and selling that insight to advertisers.  But like smart cookies (yes, pun intended), they made their Android OS open source.  This follows in the long tradition of UNIX, developed by AT&T Bell Laboratories (think Dennis Ritchie) and licensed far and wide, source code included.  Many different branches of UNIX abound, and indeed a large UNIX-like offshoot re-opened the "open sourceness" of that world - Linux, a from-scratch reimplementation.  At this point, it's fair to say that thousands of different flavors of UNIX and UNIX-like systems are in continuous use today - if you include all the Linux branches, Mac OS X, and the derivatives of OS X: iOS, CarPlay, and whatever they call the Apple TV OS.

So, Google has entered the fray with a very successful mobile OS, Android.  Many others have "branched" or "forked" it by downloading the source code, modifying it, and building their own versions of Android.  Now, various downstream versions of Android are running on a plethora of machines - not just phones and tablets, but also embedded devices, set-top boxes, and more.  Truly, the ubiquitous nature of Android has made it the de facto standard by sheer number of devices.  And thus, Android runs on everybody and their brother's equipment.

However, it is a different mix today than in the 1970s and '80s when this was beginning.  Computing is everywhere - not just on your desk or in your lap, but on your wrist, in all your electronics, heck, even woven into the fabric you wear.  And Apple is now a huge, major player in all of these markets, even more so than in the days of the Apple II and IIe.  And Apple is one big, cohesive unit of consistency and connectedness, whereas Android is a scattered, confused mess of hardware manufacturers, versions, app stores, and more.

What It Means To Us

Does anyone remember what happened in those days of PC proliferation?  What were some of the issues we had?  First, we had all kinds of compatibility problems - drivers, hardware, and more.  Software constantly crashed and malfunctioned because the underlying system was different from the one it was developed on.  Then came a watershed moment as computers got interconnected - viruses spread like a plague.  If you had a certain video card, or the wrong version of a driver, or some hard drive controller, or some modem, or mouse, or keyboard, or what have you - things didn't work right because it wasn't all compatible together.  It was the Wild West.  Personally, I hated "PC Compatibles" not only because they were the norm, but because they were boring and built with poor quality from hardware to software (and those few built with good hardware quality had software and firmware issues - heck, they ran DOS and Windows!).

The great thing about the proliferation is that it led to cheap equipment - but the double-edged sword meant shrinking profits for manufacturers and jobs moving overseas.  Lots of variety, but we got bitten by the lack of consistency.  In 1987, when I started selling PCs, the sales margin on a PC was 30 to 45 points (or more, perhaps).  By 1990, after I had gotten out, that had shrunk to maybe 10 points, and it is now around 2-3 points.

Enter Apple.  What's different about what they did?  They moved their flagship systems to a UNIX-based OS, and they had expansion slots to add third-party hardware - so how is what they did any different?

Apple has throughout its history been criticized for maniacal control of its environments.  From hand-selecting components for the computers to an ecosystem that closes out competitors, many have said they did it for profits.  Profit is the end result, but it was not the primary goal.  As has been evidenced over and over, a singular focus on profitability will lead you to stumble from the profitable path, because you lose focus on the basics of product quality and customer satisfaction.  Then somebody comes in and undercuts you on price with the same product, and you are finished.  Kaput.  Brand loyalty doesn't exist, because it wasn't your focus.

Apple has consistently (since its rebirth after booting John Sculley) kept every aspect of their products controlled in order to produce a consistent, reliable series of quality products that work together seamlessly.  Expansion is through industry-accepted standards (USB, SATA, Ethernet, SODIMM, etc.), or at least ones that they deem should be acceptable (Thunderbolt, Lightning, etc.).  Software remains tight, with security to protect users' privacy - the company sells products, not subscriptions (yes, I know about iCloud storage and iTunes Match).  You buy an iPhone, you buy an iPad, you buy a Mac, and that's it.  They don't want your data - they don't want to snoop on their private, encrypted text messaging platform, and they don't want to collect your usage statistics to sell to advertisers.  I know: as part of the Apple iOS Developer Program, I have never had access to iTunes customer data to help me sell my apps.

With Android, everybody and their brother makes an Android device.  But remember the fork?  Those manufacturers forked Android what, two or three years ago, to develop that new device you just got for Christmas or Hanukkah?  And now you get a new device running ancient technology that doesn't protect you from viruses.  Worse, suppose you are an app developer - what version of Android do you target?  4.1?  4.2?  4.3?  4.4?  5.0?  What devices do you target?  You probably go with the big markets - Samsung, HTC, LG, etc.  That both limits you and makes your development and testing cycles much more complex.

And what are the results?  Let's look at one market: mobile smartphones.  A recent article shows that Apple, even though its global market share is around 25%, accounts for 90% of the profitability.  And it isn't as though their products are that much more expensive than competitors' - indeed, they are about the same cost.  Sounds like they are doing something right.

So, back in the day I was an expert, but I never did like the IBM PC (or the PC-compatible clones).  For the same reasons and more, Android has many drawbacks.  For a technical geek, it may be the cat's meow, because it is more open, more accessible to the inner workings.  But I am both a technical geek and a user, and reliability and security, coupled with a commitment to protecting privacy and personal information, are more important to me.  I don't need a remote terminal login to my phone - I'm cool with that!  And I certainly don't want an open messaging platform, where the vendor gleans everything from my text messaging (and everything else I do online) and uses it to monetize my personal, private activities.