Thursday, May 21, 2015

Neither Snow nor Rain nor Heat, But Maybe Cutbacks

Growing up, we learned in history class about the Pony Express and the history of the US Post Office.  When we think of modern society and the integrity we have come to rely upon, there are many pieces (technological inventions) that hold that integrity in place - the invention of time zones so we can coordinate activities, and of course all the communication technologies - but to me, the bedrock of that integrity is a reliable postal service.

Over the years, the technology involved in delivering mail has become quite massive.  Remember those bar codes that helped the computers scan your letters and route them?  Massive sorting facilities scan at amazingly high speeds, and that pretty much unerringly routes your mail.  Nowadays the technology is so advanced, it can scan printed and even handwritten addresses without bar codes and accurately route the mail - to the tune of hundreds of letters a second.

Banks count on it.  Mail order businesses count on it.  Utilities count on it.  Legal entities count on it.  Indeed, many of the pillars of our economy and society rest on the assumption that if you drop a letter in the mailbox, it will arrive at its destination in a predictably short time, without fail (except in extreme cases, say if a mail delivery is destroyed by disaster).

"Yes, we all know that - so what?" you may ask.  We just recently had a big family event, for which we sent out invitations, got back RSVP cards, and sent out thank-you notes.  That came to over 400 individual pieces of mail going back and forth over a few months' time.  And that's just what was orchestrated by us, one family.  Imagine the load the USPS has to deal with on a daily basis.

Now, with that large a batch, you may not be surprised to find that perhaps a letter got lost.  And historically, that has been our collective experience - occasionally a piece of mail falls down in the delivery truck, gets missed, and ends up being late.  Perhaps a piece is destroyed or damaged - but if damaged, the USPS still delivers it, with an apology.  However, we found that around 10 invitations never made it, and several more were delayed (not counting the few returned for wrong addresses).  Several RSVPs didn't make it back (we confirmed by phone), and about 8 people mentioned they never received thank-you notes.  On top of that, several more local deliveries (where we anticipated 1-2 days) were delayed by more than a week (8-9 days to deliver).

Adding that up gives around 20 pieces out of 400 that were lost, plus several more that were delayed - call it 25 in all.  That's over 6%.  And when we are used to relying on the mail to the point where we mail and forget, 6% is a massive number.  Multiply that out by the USPS daily volume.
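For what it's worth, the arithmetic is easy to check, using the post's own rough figures:

```python
# Rough tally from our mailing (assumed figures: ~20 pieces lost outright,
# call it 25 once the badly delayed pieces are included)
sent = 400
lost_or_delayed = 25

rate = lost_or_delayed / sent * 100
print(f"{lost_or_delayed} of {sent} pieces: {rate:.2f}% failure rate")
```

That is one in sixteen pieces of mail - for a service we treat as fire-and-forget.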

Now, this is not our first time around the block - we had the same family event 3 years ago, during the exact same time of year, during which it never even occurred to us to doubt the reliability and integrity of the USPS.

When we asked the Post Office about this, we also found out something quite surprising.  All my life, we had known that the USPS operated a huge sorting facility in Troy, MI, through which most area mail was routed.  We found that it had been closed - for almost 20 years, the clerk said - and that ever more mail was now routed through a much more remote facility, in Ohio.  And this, she said, was why it now takes longer to send mail, and why it is less reliable.

Unless, of course, you put tracking on it.  In which case it arrives 100% of the time.  And if you expedite, it arrives faster.  To me, this seems a blatant way of holding us hostage.  If you pay the ever-spiraling First Class postage rate, you basically get Third Class service - unless you also pay the exorbitant fees of a swindler to "guarantee" that your parcel gets there.  And let me tell you, we aren't going to do that on 400 pieces of mail.

So, I ask you, where is technology going with this?  Is the sending of physical written pieces of information becoming a thing of the past?  Do we now e-mail, hyperlink, e-fax, text or post what we want to send from now on?  Do we resort to non-Post Office couriers for packages?  Maybe the USPS has mismanaged its public trust, and maybe it's time to let it fail completely.

As a society, do we now rely upon invitation management web sites and e-mail?  What about those family members who are not so Internet savvy, who don't use smart devices, and who don't keep up on e-mail or social networking?  I feel cheated and betrayed, as if yet another pillar of the country and society we grew up loving and admiring is cracking, falling apart, in an increasingly apathetic and irresponsible public.  Is that just a feeling, or is there something to that?

Wednesday, March 11, 2015

ApplePay Insecure? The Deeper Truth

There are now a lot of "news" articles (such as this one in the LA Times) running sensational headlines that Apple Pay is not secure.  Lately it came out that fraud rates among Apple Pay customers were "through the roof" - 6% as opposed to 1%.  At face value, this would appear to mean that Apple Pay is a much less secure form of payment than normal credit cards or other methods.  However, let's cut through the hype and the fervent Apple bashing, and get to the facts.

First, in September 2014 when Apple Pay was announced, Tim Cook claimed that it would be more secure.  So what exactly is the discrepancy?

The Old Way

To understand, we first need to look at the different payment systems, their vulnerabilities, and what is being done to combat fraud.  Current credit cards employ one of two technologies: a magnetic stripe containing the card information, or an EMV (Europay / MasterCard / Visa) encryption chip.  With a magnetic stripe, if you want to pay for goods or services with your card, you either swipe it on a reader, or read the card information aloud (or from memory) to someone who inputs it into a payment system.  A paper imprint with carbon paper is a tried-and-true 50-year-old method of taking payment information as well.  With EMV, when the card is scanned, the chip generates a per-transaction code that can only be used to issue payment to that merchant for that transaction.  Your personal information (card number, name, expiration date, security code, etc.) is not communicated to the merchant.  However, if you are paying over the phone or Internet (not in person), then you have to resort to Plan B just like the magnetic stripe.
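That per-transaction code is the key idea.  As a rough illustration only - the real EMV cryptogram computation uses issuer-derived 3DES/AES session keys over ISO-defined data fields, not this - here is a minimal Python sketch of a chip deriving a one-time code from a secret that never leaves the card:

```python
import hashlib
import hmac

def transaction_cryptogram(card_key: bytes, merchant_id: str,
                           amount_cents: int, counter: int) -> str:
    """Simplified stand-in for an EMV per-transaction code: mix a
    card-resident secret with the transaction details, so the resulting
    code is useless for any other merchant, amount, or transaction."""
    msg = f"{merchant_id}|{amount_cents}|{counter}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]

key = b"secret-that-never-leaves-the-chip"   # hypothetical card key
code1 = transaction_cryptogram(key, "MERCHANT-42", 1999, 1)
code2 = transaction_cryptogram(key, "MERCHANT-42", 1999, 2)
assert code1 != code2   # replaying a captured code fails: the counter moved on
```

Because the merchant, amount, and a transaction counter all feed into the code, a thief who intercepts one code cannot spend with it anywhere else.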

So, what are the vulnerabilities in this old system?

  1. Point of Purchase Interception
    1. If you have a dishonest clerk, or a fake card scanner attached on top of the real one, they can grab your card info and record it for later fraud.
    2. Many card terminals transmit over unencrypted, open phone lines that can be tapped or intercepted.
    3. Many others transmit by computer, over the Internet, where traffic can be intercepted.
  2. Information System Hacking
    1. The vast majority of large businesses, whether the purchase is online or in person, enter and store your card information in their database.  This means that a hacker who breaks into the system over the Internet can glean thousands or millions of contact and payment records.  Many of these systems do not have intrusion detection, so it is entirely possible that at least half of these thefts go undetected or unnoticed.
  3. Card Issuing Bank
    1. If a thief steals not your card information, but personal identification information (your name, social security number, address, bank account numbers, employer, utility account numbers, etc.), they can pose as you and open a new card that you don't know about, in your name.  With the statements sent to your address.  And run up charges that you are billed for.
Now we know this, but what has been done to combat the fraud?  Every card issuer is required to implement fraud monitoring that learns your purchase patterns and attempts to detect purchases made out of pattern - alerting people and possibly disabling the card.  Many of us have experienced going on vacation and having our card stop working because the fraud monitoring system detects the out-of-normal behavior.  However, thieves can also gain information on your spending habits (where your purchases are made) and sell that information to card buyers, who then use those cards in those same areas, avoiding fraud monitoring for a period of time.  Still, these efforts have held the fraud rate at about 1% (about $1 for every $100 spent).  That is actually pretty high if you think about it - especially in the United States, where merchants are not (yet) required to provide EMV card readers.
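A toy sketch of the idea behind that monitoring - flagging purchases that fall outside learned patterns - might look like this in Python (real issuer systems use vastly richer models; the field names and thresholds here are invented for illustration):

```python
from statistics import mean, stdev

def flag_out_of_pattern(history, purchase, z_threshold=3.0):
    """Flag a purchase whose amount sits far outside the cardholder's
    usual spending, or whose area the card has never been used in."""
    amounts = [p["amount"] for p in history]
    known_areas = {p["area"] for p in history}
    mu, sigma = mean(amounts), stdev(amounts)
    amount_odd = sigma > 0 and abs(purchase["amount"] - mu) / sigma > z_threshold
    area_odd = purchase["area"] not in known_areas
    return amount_odd or area_odd

history = [{"amount": a, "area": "Detroit"} for a in (35, 42, 50, 38, 45)]
print(flag_out_of_pattern(history, {"amount": 40, "area": "Detroit"}))  # usual pattern
print(flag_out_of_pattern(history, {"amount": 900, "area": "Cancun"}))  # vacation trips the flag
```

This also shows why vacations trip the system: the new area alone is enough to raise the flag, exactly as described above.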

The Apple Pay Way

So now, how does Apple Pay work?  Apple Pay employs the EMV approach in the device itself (iPhone, Apple Watch, iPad, etc.).  But this is used both for online purchases (via in-app payments) and in-person purchases (via Near Field Communication, or NFC).  And, in order to use your device to pay, you must scan your fingerprint.  So, for payment, it is definitely more secure.  Why, then, is there so much fraud with Apple Pay?  Here the vulnerability comes to light:

  1. Card Issuing Bank / Apple Pay Trust
    1. In order to register a card with Apple Pay, Apple came up with a system whereby the cardholder takes a photo of the card with the phone, then goes into the bank to confirm that the cardholder really is the cardholder (physical presence before a physical bank employee).  Alternatively, a cardholder's device (iPhone) can be registered via Apple, and a secure iMessage sent to the previously-configured trusted device to confirm identity.  However, most banks balked at the "overhead" this would impose on their customers, and opted for a less secure authentication over the phone: a bank operator talks to the "cardholder" (hopefully it really is that person) and asks some "personal" questions to confirm s/he is the cardholder, allowing that card to be registered with the device for Apple Pay.
As you can see, it is impossible to intercept any useful information at the point of transaction.  Nor is any card or cardholder information made available to the merchant, so there is nothing sensitive to store in a database that can be hacked.  However, you can probably spend about 2 seconds and think of a way around this - and the vulnerability has nothing to do with Apple.  The thieves have had nothing to do but think about this, so they saw the hole as well.  Using the readily available methods of obtaining personal information, a thief who has the card information can register the victim's card with his own Apple Pay through the bank - but that is the hard way, and it may trace back to him through the device.  Easier still is to use the stolen identity information to open a new credit card through an issuing bank - one the person whose identity was stolen knows nothing about - and then register that card with Apple Pay.  And that is exactly what is happening.

What are the methods of mitigating this?  If banks have lax procedures or standards for authenticating their customers (and many do), then there is very little that can be done.  Certainly you can be alerted if you monitor your credit report with all 3 bureaus, and perhaps pay for a monitoring service like LifeLock, but it actually takes some time before a new account makes its way through the system and becomes an alert.  And you have to pay attention to the alerts, and determine whether each one is because you actually did open an account (a false alert) or because someone else did in your name (a true alert).  This alert monitoring is not as immediate or mature as the fraud monitoring in place for the old card methods, where as soon as a purchase is made, the computer can flag it and call you or disable the card.

So, let's compare apples to apples (yes, pun intended).  It is actually misleading to cite that 1% fraud rate, because it is a percentage of a different means of theft (card theft, not identity theft).  By the same measure, Apple Pay would actually be at 0% - none of the Apple Pay cards themselves are stolen.  So what is the rate of identity theft without Apple Pay - of thieves opening accounts under a stolen identity?  That figure I haven't seen.  And that is the rate you must compare to the 6% they are claiming for Apple Pay.
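To make the mismatch concrete, here is a sketch with purely hypothetical volumes (the real figures were not published): the 1% divides card-data fraud by enormous everyday card volume, while the 6% divides identity-theft fraud by the much smaller Apple Pay volume.

```python
# Hypothetical dollar volumes, chosen only to reproduce the two headline
# percentages - they are NOT real industry figures.
card_volume = 1_000_000        # spent on plain cards (assumed)
card_theft_fraud = 10_000      # fraud from stolen CARD data -> 1%

applepay_volume = 50_000       # spent via Apple Pay (assumed)
identity_fraud = 3_000         # fraud from stolen IDENTITIES provisioned
                               # into Apple Pay -> 6%

print(card_theft_fraud / card_volume)    # 0.01
print(identity_fraud / applepay_volume)  # 0.06
```

Different numerators (two different crimes) over different denominators - which is exactly why quoting "6% vs 1%" says nothing about Apple Pay's security.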

Tuesday, March 10, 2015

Network Overload

The sheer plethora of different flavors of social networking has my head spinning.  I mean, you've got Instagram.  What's that?  You take a pic, share it, comment on it.  OK, so how's that different from Facebook or Twitter?  And it seems all the other things that have popped up are some quirky flavor of, basically, texting.

So, I ask you - the general readership.  What do we need?  What do I use?  Facebook, Twitter (kind of), LinkedIn, and Google Plus (kind of).
  • Facebook - why do I use it?  Because that's where everyone is.  Think Myspace, but nowadays.  (Or maybe Myspace is a dated reference - who uses that anymore?) I would be perfectly happy to drop it completely, but most everyone has a FB account.
  • Twitter - I don't use it, but I do "cc" tweets via a multi-network publishing service.  Why?  Because I'm a tech geek, so I must be on Twitter.  I do get Twitter's advantages - its value proposition, if you will.  But it's not for me.
  • LinkedIn - it's a business Facebook.  That works.  Everyone is on it, so I'm on it.
  • Google Plus - Like Twitter, I really don't use it.  But I do.  It's the platform I'd like to use instead of Facebook, but let's face it, not everyone's on it.  But it's too important to ignore.
  • Microsoft Skype - I use Skype for voice/video, and it has a chat feature that is convenient at times.
  • Microsoft Lync / Messenger / Communicator (whatever you want to call it), we use it at work.  That's it.
  • iMessage - I use heavily, since a LOT of people have Apple products.
  • SMS - I use for mobile texting (and from my PC)
Then we have sooooo many others that seem to be popular, that seem to be doing well (as businesses, as startups), but where I totally fail to see any value they bring, or any hope of lasting beyond the whim of the moment.  Tell me I'm wrong, but I just don't get it.

  • Snapchat - Instagram that disappears seconds after you see it.  So?
  • FireChat - a chat program that daisy-chains through nearby devices, so you don't need cellular, WiFi, and Bluetooth all at once - any one link will do - to "tweet" from anywhere in the world.
  • Salesforce - yeah, there's a chat in Salesforce.  Why??  Seriously, like we don't have anything else?
  • Yammer - some LinkedIn wannabe.

So, there is a lot of yammer out there - a lot of noise.  But what value is it bringing our lives?  Twitter, it seems, is the way to disseminate news - especially from suppressive societies.  Facebook is the premier socialization network, with Google Plus a runner up.  And LinkedIn is the premier business network.  Other than that, what else should there be?

Now I don't want to suggest that we stifle innovation, or that we close off any other possibilities.  However, how much can we really take?  What fulfills actual need?  Do we really need "After School" to be abused and contribute to anonymous threats, intimidation, and bullying?  I leave it to you, the using public, to decide.  How many networks do we need to be in?  Should we be in?  To me, it seems totally crazy to get embroiled in so many that I spend my real life digitally interacting instead of dealing with the people important in my life, who are sitting right next to me at the table or on the couch.  And I find it disgusting and reprehensible to have people over for a holiday celebration, and have them be "networking" on their mobile devices.  I'm thinking, check the mobile devices in at the door ("the Sheriff don't allow no weapons on this here premises, leave your guns at the door!").

Thoughts?  Criticism?  Evangelizing for a little-known networking platform?  Let me know.

Wednesday, March 4, 2015

Google is the new IBM

A Tangent

This is the 128th post on my blog, and as anyone in computer science knows, that is significant.  128 is 2 to the 7th power - exactly half of the 256 values a byte can hold.

The History

It has happened again - the world has come full circle.  The world of computing, that is.  In the 1980's, as we moved from the IBM PC and Apple II into a full-blown explosion of ubiquitous computing, IBM and Microsoft developed what became a baseline for computers.  We used to remark that "everyone and their brother" had a brand of computer, and there was a plethora of components so you could build your own.  Every DOS- and Windows-compatible computer was called an "IBM Compatible," but just as often it was also called an "IBM PC" - even the ones made by others (Brother, Epson, Gateway, and a thousand other names).

How did this come about?  Microsoft was just beginning when Bill Gates bought the rights to DOS.  He then held an exclusive copyright (vigorously defended), and sold usage rights to hardware manufacturers like Tandy, IBM, Compaq, and others.  Since IBM was the biggest company in the field at the time (and a pioneer and innovator in the hardware), it became the best-known brand name.  Note that Apple devised the first popular "all-in-one" computer - screen, keyboard, and CPU together instead of separate kit components.  This was the vision Steve Jobs championed (built on Steve Wozniak's engineering), and it revolutionized personal computing - but IBM, perhaps more than anyone, capitalized on it.

(Side Note) Ironically enough, the reason IBM developed the Personal Computer at the time was that they felt they needed a product that would prove, by how limited it was, that the only way to do real computing was to get a real computer - a mainframe.  The PC would be a showcase in failure, and when it did fail, it would morph into a terminal for the mainframe (and the AS/400 mini).

As prices came down due to technology and manufacturing improvements, and competition with a myriad of manufacturers - the public bought into small-system computing in a big way.  IBM was a household name even before this revolution (think typewriters - before our first computer, we had an IBM Selectric that I spent hours hammering out my various writings on), so they cemented their name during this phase.  Everybody and their brother was licensing technology not just from Microsoft, but also from IBM - as they patented many inventions and advancements in personal computing.

Along came Google - a company named after "googol," the enormous number whose name was famously coined by a mathematician's young nephew.  The search engine company that was really an advertising company.  We all know their story; they got huge, fast.  They branched into web browsing, and then the mobile smart device market opened up.  Apple revolutionized yet another market, and Google decided to venture into it with a mobile OS - Android.

Now the various aspects of Google come together - web browser, search engine, advertising, and a mobile OS.  This is a massive synergistic environment designed to make money by mining your personal habits and selling that information to advertisers.  But like smart cookies (yes, pun intended), they made their Android OS open source.  This follows in the long tradition of UNIX, developed at AT&T Bell Laboratories (think Ken Thompson and Dennis Ritchie) and widely licensed out in source form.  Many different branches of UNIX abound - indeed, a large sub-branch re-opened the "open sourceness" of the UNIX tradition: Linux.  At this point, it's fair to say that thousands of different flavors of UNIX are in continuous use today - if you include all its derivatives, like the Linux branches, Mac OS X, and the derivatives of OS X: iOS, CarPlay, and whatever they call the Apple TV OS.

So, Google has entered the fray with a very successful mobile OS, Android.  Many other people have "branched" or "forked" it by downloading the source code, modifying it, and building their own version of Android.  Now, various downstream versions of Android are running on a plethora of machines - not just phones and tablets, but also embedded devices, set-top boxes, and more.  Truly, the ubiquitous nature of Android means it has become the de facto standard by sheer number of devices.  And thus, Android runs on everybody and their brother's equipment.

However, it is a different mix today than in the 1980's when all this was beginning.  Computing is everywhere - not just on your desk or in your lap, but on your wrist, in all your electronics, heck, even woven into the fabric you wear.  And Apple is now a huge, major player in all of these markets, even more so than in the days of the Apple II and IIe.  And Apple is one big, cohesive unit of consistency and connectedness, whereas Android is a scattered, confused mess of hardware manufacturers, versions, app stores, and more.

What It Means To Us

Does anyone remember what happened in the IBM PC days of PC proliferation?  What were some of the issues we had?  First, we had all kinds of compatibility problems - drivers, hardware, and more.  Software constantly crashed and malfunctioned because the underlying system was different from the one it was developed on.  Then came a watershed moment as computers got interconnected - viruses spread like a plague.  If you had a certain video card, or the wrong version of a driver, or some hard drive controller, or some modem, or mouse, or keyboard, or what have you - things didn't work right, because it wasn't all compatible together.  It was the Wild West.  Personally, I hated "PC Compatibles" not only because they were the norm, but because they were boring and built with poor quality from hardware to software (and those few built with good hardware quality had software and firmware issues - heck, they ran DOS and Windows!).

The great thing about the proliferation, is it led to cheap equipment - but the double-edged sword meant shrinking profits for manufacturers, and moving of jobs overseas.  Lots of variety, but we got bit by the lack of consistency.  In 1987, when I started selling PC's, the sales margin on a PC was from 30 to 45 points (or more perhaps).  In 1990, after I had gotten out, that had shrunk to maybe 10 points, and is now around 2-3 points.

Enter Apple.  What's different about what they did?  They moved their flagship systems to UNIX-based OS, they had expansion slots to add third-party hardware - how is what they did different?

Apple has throughout its history been criticized for a maniacal control of its environments.  From hand selecting components for the computers, to an ecosystem that closes out competitors, many have said that they did it for profits.  This is the end result, but not the primary goal.  As is evidenced over and over, a singular focus on profitability will lead you to stumble from the profitable path, because you lose focus on the basics of product quality and customer satisfaction.  Then, somebody comes in and undercuts you by price with the same product, and you are finished.  Kaput.  Brand loyalty doesn't exist, because it wasn't your focus.

Apple has consistently (since its rebirth after booting John Sculley) kept every aspect of their products controlled in order to produce a consistent, reliable series of quality products that work together seamlessly.  Expansion is through industry-accepted standards (USB, SATA, Ethernet, SODIMM, etc.), or at least ones that they deem should be acceptable (Thunderbolt, Lightning, etc.).  Software remains tight, with security to protect users' privacy - the company sells products, not subscriptions (yes, I know about iCloud storage and iTunes Match).  You buy an iPhone, you buy an iPad, you buy a Mac - that's it.  They don't want your data - they don't snoop on their private, encrypted text messaging platform, and they don't collect your usage statistics to sell to advertisers.  I know: as part of the Apple iOS Developer Program, I have never had any access to iTunes customer data to help me sell my apps.

With Android, everybody and his brother makes an Android device.  But remember the fork?  Those manufacturers forked Android what, 2 or 3 years ago, to develop that new device you just got for Christmas or Hanukkah?  And now you get a new device running ancient technology that doesn't protect you from viruses.  Worse, suppose you are an app developer - what version of Android do you target?  4.1?  4.2?  4.3?  4.4?  5.0?  What devices do you target?  You probably go with the big markets - Samsung, HTC, LG, etc.  That both limits you and makes your development and testing cycles much more complex.

And what are the results?  Let's look at one market, mobile smart phones.  A recent article shows that Apple, even though their global market share is around 25%, accounts for 90% of the profitability.  And it isn't like their products are that much more expensive than competitors - indeed, they are about the same cost.  Sounds like they are doing something right.

So, back in the day I was an expert, but I never did like the IBM PC (or the PC Compatible clones).  For the same reasons and more, Android has many drawbacks.  For a technical geek, it may be the cat's meow, because it is more open, more accessible to the inner workings.  But I am both a technical geek and a user, and reliability and security, coupled with a commitment to protecting privacy and personal information, are more important to me.  I don't need a remote terminal login to my phone - I'm fine without it!  And I certainly don't want an open messaging platform, where they glean everything from my text messaging (and everything else I do online) and use that to monetize my personal, private activities.

Friday, January 30, 2015

Windows 10 - Are You Impressed?

On January 21, Microsoft held an event to introduce in detail what they are doing with Windows 10.  Now, when I first saw the intro of Windows 10, I have to admit that I was not impressed.  You may think for some strange reason that I am an Apple Fanboy and I hate Microsoft.  Far from it!  I just hate Microsoft products, and I love Apple products.  Microsoft, indeed, has done much to make Apple what it is, including (but not limited to) making the first productivity software for the Mac, bailing them out and investing in Steve Jobs when the company was nearly insolvent, and lots more.

However, I have to say that what I have seen now on Windows 10 has me thinking about my old maxim "I will never buy another Windows PC again."  Yes, you heard it right - I'm rethinking that a bit, and here's why.  I'm going to run down what I've gleaned from the various Windows 10 presentations, and give you my reactions:
  • Unified Operating System across devices
    • One Windows 10, across phone, tablet, and PC.  What am I thinking?  Bloatware.  Microsoft bloatware.  Jangled user experiences.  But how have these recent presentations changed that?
    • First, in seeing the implementation across devices, it is pretty good.  Not great, not drop-everything-and-go-out-and-buy-all-Windows great.  But pretty good.  Microsoft is really trying.  I stopped by the Microsoft store a couple weeks ago, and I really tried for about 20 minutes to use the Surface Pro.  It was NOT intuitive, and I finally gave up trying to figure out some things.  Me, a long-time tech expert.  But that's Windows 8.1, and so maybe 10 is a bit better for mobile platform, but not much.
    • Universal apps is not a new idea, nor is it groundbreaking.  If they can pull off device-based separate user experiences for the same apps, well, that would be very cool and groundbreaking.  But then again, that makes the code bloated, and I definitely do NOT trust Microsoft to pull it off - well.
  • Single App Store
    • Like I trust Microsoft to handle privacy and security?  Yah, right. Do they take Apple Pay?
  • Synched Life
    • What I saw from January 21, 2015 - calendars in sync, e-mail in sync, photos - and more, these are Beta, pre-release...and yes, those features and more have been out on Mac and i-Devices for years.  Come on!
    • Wirelessly stream and print and such from a mobile device.  Hmm, have I seen that before?
    • Finally, they are catching up to where Apple was 5 years ago.  With some thought to it, for sure - I give them an A for effort.
  • Project Spartan - a "new" web browsing experience
    • Microsoft claims that this is totally new, but let's examine what they have done.
    • Pared down, simplified UI - just like Chrome and Firefox and Safari.  For years.  That's new - to Microsoft...
    • 3 new features:
      • Note-taking - so now you can annotate (draw and mark up) the web directly, graphically and textually.  Interesting, I don't really know how useful that will be.  It may be very cool, it may be a very small niche.
        • Clipping and saving to One Note - finally Microsoft has caught up with Evernote.
      • "Focus on the action of reading" - They added a reading mode to reformat web pages to be more readable.  And added a reading list into the core experience.  Hmm, just like Apple did in Safari what, 5 or 6 years ago?  Groundbreaking...
        • Built-in support for PDF files.  Wow...(no sarcasm here!)
      • Personal Assistant - building Cortana into Spartan, via the search field.  Hmm, just like Apple did with Safari what, over a year ago?  Groundbreaking...
        • What is cool is the use case he used as an example, because Cortana is tied into his schedule, and the things he is tracking - it knows he is tracking his wife's flight - as he searches for things on the web pertaining to schedule, it could let him know whether or not she can make it based on her flight arrival time and drive time.  Frankly, I don't know how much intelligence this takes to figure out the connections to what you are doing and your life, whether or not it will throw up loosely-connected or uselessly-connected info (noise), or filter that and actually give you useful information.  Based on what Microsoft has done guessing how you want to format things in Word, I think it's a non-winner.
    • Don't get me wrong, the new browser is decent, but it's no more decent than other browsers have been for years.  And indeed, when you take the entire Mac / iOS ecosystem, it still lacks.
  • Gaming
    • Interesting that Gaming is a focus for Microsoft, and Apple also introduced a new development kit for developing mobile gaming.  Obviously a lucrative market.
    • My Games, and a Friends list.  Hmm, sounds like Game Center introduced in Apple iOS...Except, you can voice- and text-chat across all platforms.  Now that's useful!
    • Activity Feed - this is cool, you get updates from games you play, kind of like a Facebook feed.
    • Integrated into Windows, a Record feature to record what you are doing, ad-hoc, and share.  That is great!
    • Just like Apple's introduction of Metal, DirectX 12 gives developers more control and lower-level access to the hardware.  And compared to DirectX 11, the same functions cut power consumption in half.
    • Streaming of XBox One games to PC's and Tablets - now that is cool.  Definitely doesn't make me want to buy a Surface Pro, but it does make a Windows PC more appealing.  Perhaps running on a Mac?
    • Bringing apps over to the XBox One TV screen - now it is possible to run a PC app on an XBox One.  Again, I don't know about the interface experience, but it brings up some interesting possibilities.
    • Microsoft has definitely innovated and expanded in the gaming sphere.
  • Hololens
    • Folks, now this is groundbreaking.  Again, no new ideas here - but the implementation of it is absolutely breathtaking (caveats below).
    • If you want to know what Hololens is, have you seen the Iron Man movies?  Where Tony Stark (Robert Downey Jr.) has a hologram of his Iron Man suit in front of him, grabs things with his hands, moves them around, and builds the design in 3D in front of him?  That is Hololens.  Yes, the full thing!  In reality.
    • The way it works: it is a PC that you wear on your head, with a visor that goes over your eyes, onto which it projects "holograms" on top of what you see around you - and you can interact with them.  You can do 3D design with virtual holograms, and even print the results on a 3D printer.  WAY COOL.
    • You can grab a rectangular wall hanging, say a picture, and make it into a Netflix screen in your visor.  WAY COOL.
    • You can throw a Skype call floating in midair as you walk, put a Word document floating in front of you, and so on.  It works with all Win32 apps (OK, really, haven't they given up the 32-bit architecture altogether, like Apple did 3 years ago?).  WAY COOL.
    • The possibilities are limitless.  Imagine walking into a store, and if you are wearing your Hololens, you can see and interact with special advertisements, info points, even products they don't have in stock but you want to know more about before you order.
    • And the coolest part of this is, the Hololens support is built into Windows 10.  So all your apps function within the Operating System, and work in the visor.  Of course, if you haven't written your app to take advantage of the 3D touch and voice interaction, it will be at a basic level - you are interacting with the old style UI using gesture paradigms, but it will still work.  You can throw up a window of your 1990's app floating in virtual space, or tacked on a table in front of you, and use it with your finger as you would have with a mouse.
    • Now, the caveats:
      • What, you want more EMF's, more WiFi and Bluetooth right around your BRAIN?  Sounds like Microsoft has invested a lot of money in the Cancer Treatment Industry...
      • Windows is still Windows, rife with lack of security (how exactly did they pass DOD security standards?  Nepotism?), the largest target of viruses and hackers.  It seems the more computers become personal, the more interactive they get in our lives, the greater the possibility and consequences for destruction.
      • Has anyone used Kinect?  Ford Sync?  The voice and visual interaction that Microsoft has put together have been sketchy.  It doesn't work very well.  "The devil's in the details," and folks, Microsoft is not a detail company given its past history.  I don't trust Microsoft in the execution of those details, and so I don't trust the experience of Hololens will live up to the marketing and hype.  I have seen the opposite with Apple - they market and hype, and the products over-deliver.  As cool and awesome as this is to think it is here, now, and we can start living the Sci Fi dream, I don't think Microsoft is the one to fulfill on that dream.
      • "Cortana is scouring the Internet, learning things about me, about the places I am going, she's getting smarter all the time."  Hmm, and Microsoft is a company known for its fierce protection of customer privacy?  A company that is safe from hackers?  One that wouldn't cave to a Federal subpoena for personal information (which, by the way, Apple has stated it doesn't store on its systems, but I bet you anything that Microsoft does)?  And have that tied to Bing, which is their advertising business a la Google?  No thanks - if the product really does worm its virus way into my life, I wouldn't give it to Microsoft.  Guess if I do use Windows 10, I'm using Firefox or Chrome (unless Apple comes out with a current Safari for Windows).
  • Upgrade Pricing
    • Here again, Microsoft is not a leader.  You can upgrade from Windows 7 or 8, to Windows 10 - for free!  Well, for a year.  And then I expect it will be back to business as usual.  Outrageous prices for a mediocre product.
    • They have said nothing about the Server line of Windows products, so I don't know what their plans are for the corresponding server version.
    • Although it may not seem like it from reading this post, I am very impressed with Windows 10. For the first time in the history of Windows, Microsoft has innovated - taken the leap to the next level of evolution.  For the first time, Microsoft has not only taken steps to bring their browser on a par with the competition, but even to try to bring it ahead (although I'm skeptical it will be that useful).
    • I hope the security and stability have improved - definitely Windows 7 has been a very stable platform, but as I've said before, avoid every other release of Windows because it is unstable.  That doesn't mean you shouldn't test out 10 (aka 9), but certainly I would not advise a business customer to wholesale upgrade the corporate computers to it.  No way.
    • Microsoft has made a huge leap into the connected digital life, with a strong focus on both private and corporate usage.
    • Apple and Microsoft now have compelling product ecospheres, although Microsoft's is still pretty far behind in the breadth of what it addresses.  I believe this will be a strong offering for Microsoft to build on for future business, while Apple will remain strong.  That would seem to leave Google pretty far behind, as Chrome OS has seen pretty weak adoption, and the Google Chrome / Android / Glass / Smart Watch ecosphere leaves a lot to be desired, with a very weak cohesive vision.  I think Google will see continued declining market share in the mobile world, and Microsoft will gobble up the lion's share of its losses.

Friday, January 16, 2015

The Breakdown of Journalism Standards in an Online World

Today news flows so fast we need computers to keep up with it. Reports, blogs, news articles and more are available as events occur or soon thereafter. But has this helped? I'm a single blogger editing my own material, and it's sad to say that my articles have better grammar and spelling than those of the big news organizations.

Daily, I consume a barrage of feeds - from professional and semi-professional organizations (think NPR, Mobile Nations, local news from WXYZ the ABC affiliate, and more).  And daily, I am hit by constant misspellings, word exclusions, improper word usage, and more - enough to give an English teacher a heart attack.  In fact, I don't think I've read a single article in years, no matter how short, that didn't have at least one issue.

But let's put things into perspective.  Throughout the what, 1300+ year history of the modern English language (and any language, for that matter), the language was fluid not just from generation to generation, but from day to day.  As people became literate, there were many different ways to spell (and pronounce) the same word, depending on whom they were addressing in the letter.  It was only as dictionaries were invented, and then computers, that the modern concept of "proper" spelling came along.  In the 18th and 19th centuries, American "dictionaryists" became enamored (enamoured) with reworking spellings to be more consistent, and with replacing nonsensical letter combinations with single letters (like tung instead of tongue).  Some caught on, some didn't.  We don't send people to gaol, we incarcerate them in jail.  We don't colour our paper, we color it.  But tung never caught on.

Of course, when computers came of age, the concepts of "correct" spelling and grammar were firmly entrenched - because computers LOVE consistency.  Spelling and grammar rules can be programmed, and enforced.
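That idea can be sketched in a few lines of Python.  This is a toy illustration, not how any real spell checker is built: the tiny word set here stands in for a full lexicon (a real program would load something like a system dictionary file), and the function name is my own invention.

```python
# Toy spell checker: flag any word not found in a fixed dictionary.
# The small word set below is a stand-in for a real lexicon.
DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def misspelled(text):
    """Return the words in text that are not in the dictionary (case-insensitive)."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return [w for w in words if w and w not in DICTIONARY]

print(misspelled("The quikc brown fox jumps ovr the lazy dog"))
# -> ['quikc', 'ovr']
```

Because the rule is mechanical (either a word is in the list or it isn't), a computer can enforce it with perfect consistency, which is exactly why "correct" spelling hardened once software took over.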

So, we have English that grew up with its rules on spelling - and then we have American English (after the efforts of good old Webster to simplify the spellings), which has a bastardized set of rules, not to mention a mishmash of words from other languages and cultures mixed in.  For some reason that I cannot explain, my brain has grasped the language, and the nuances and expressions thereof.  But for many others, I know it is difficult to pass (let alone excel in) an English class (American or the Queen's).

And still, I had grown up with a firm sense of that consistency as "right."  The right way to write, the right way to spell.  And, I had thought that journalism as an industry was the epitome of this.  Of course, in my time, newspapers, books and magazines were the only way to get your printed readings.  Now, there are thousands of sources, and mostly digital.  The pace has picked up - and the quality has suffered tremendously.

Why is the written language so important?  As we all know, human communication is difficult enough by default.  Typos, grammatical errors, exclusions and word rearrangements (common in today's online publications) can lead to misunderstandings, or even mangled meanings.

Now, as annoying as the "correctness" of the written language is, what also bothers me about the state of journalism in the digital age is the immediate rawness of it all.  Even though many of the professional publications are "edited" and "vetted" before publication, I argue that the quality of the editing is sadly inadequate by yesteryear's standards.  I imagine editors of times past would quit on principle if they saw the quality of material produced in copious quantities today.  As they used to say in the show Farscape, what a bunch of dren.

Am I the only one this bothers?  Am I going to be rolling in my grave with nobody else but Andy Rooney (think the episode with him and - booyakasha - Ali G)?

Conversations with Siri #10