If it doesn’t say Kellogg’s on the box…

I recently bought a new Apple MacBook Pro. It’s faster, lighter, cooler, smaller than the one it is replacing, and all connectivity is via USB-C. This is a good thing. Like many, I was skeptical about the usefulness of the Touch Bar, but I’m now convinced.

It wasn’t cheap, and the spending doesn’t end there. I have been known to leave my power supply at home when visiting customers, so I like to keep a second PSU and leads in my computer bag at all times.

My new MBP has four USB-C ports and uses an 87W USB-C Power Adaptor. At £79, that isn’t cheap either. However, the £79 does not include the required USB-C Charge Cable, which costs a further £19.

There’s more. The PSU comes with a fused UK BS1363 adaptor. If you want the extension cable that used to be supplied with the previous generation products, that’s extra too.

So, I thought I’d shop around for the best credible deal I could find. But as I’d heard rumours of potentially dangerous, fake, knock-off Apple goods, I thought I’d play it safe by buying from a high street retailer.

I should have just paid the “genuine” price and moved on. Here’s what System Report shows for the PSU and USB-C cable that came “bundled” with the MBP bought from an authorised Apple Premium Reseller in town…

And here’s the same report for the PSU and USB-C cable I bought from an “other” independent computer sales and repair shop in another town…

Notice that the products show the same values for ID, Wattage, etc., but different values for Serial Number, Name and Firmware Version. Indeed, the “other” PSU appears to have no serial number at all. More on this in a moment.

The two USB-C cables are the same length, but look and feel very different. A little googling around revealed that Apple had had some issues with early versions of the cable …

A limited number of Apple USB-C charge cables that were included with MacBook computers through June 2015 may fail due to a design issue. As a result, your MacBook may not charge or only charge intermittently when it’s connected to a power adapter with an affected cable.

Affected cables have “Designed by Apple in California. Assembled in China.” stamped on them. New, redesigned cables include a serial number after that text.

Here’s the printing stamped on my two cables (“bundled” left, “other” right) …

So, at best, the “other” cable is one of those which Apple would have recalled. But how else do they differ? Here’s what they look like, and how they weigh in (again, “bundled” left, “other” right) …

Although the cables are the same length, the “bundled” cable is thicker, heavier, and coils more tidily. But does this matter?

Well, fake products sold as “genuine” certainly seem to matter to Apple. In October 2016, they filed a lawsuit after finding that 90% of 1,000 “genuine” chargers they had bought from Amazon were fake. It also mattered to a blogger who nearly set his hotel room on fire with a fake charger.

Remember that both solutions claim to supply 87 Watts? Presumably the “other” cable is lighter because it contains less metal. So is it actually able to supply 87 Watts safely, all day and all night long? My “bundled” PSU doesn’t think so …

With the “bundled” PSU, the MBP limits the power to 60 Watts when the “other” cable is used (plausibly because USB Power Delivery restricts cables without an electronic marker chip to 3 Amps, which at 20 Volts is exactly 60 Watts). That limit is beginning to sound like a good idea. Indeed, apart from generating the above screen grabs, I haven’t been using the “other” PSU or cable at all.

So, how can you tell that you’ve got a “genuine” Apple PSU? Well, without opening them up (and voiding the warranty) it’s hard to tell, though someone has done exactly that for other models.

But there are other ways to tell them apart …

In short, the screen printing is clearer, and the seams are tighter. In the last photo above, notice that the fit of the USB-C cut-away is pretty sloppy.

But there’s also one other major difference. The “bundled” PSU’s serial number is printed deep inside the power connector…

This seems to be a new move by Apple, as previous PSUs had the serial number on a sticky label next to the retaining pin. So, at best, I think my “other” PSU is an early edition. But then it’s strange that it has no serial number at all.

The “other” PSU is also about 4 grams lighter than the “bundled” one. What, only 4 grams? Yes, but without opening it up it’s difficult to know why, though fakers have been known to add ballast.

However, I do think there is another way to tell them apart. Ladies and gentlemen, I give you “The Knock-off Knock-off”…

I tried this test a number of times, switching the PSUs around in my hand. To my ears, at least, the “other” PSU (the second one) sounds a little more hollow than the “bundled” one.

When I first spoke on the phone (before visiting the shop), when three of us visited the shop, and in email correspondence since, the independent retailer has insisted that his products are “genuine”…

We only sell genuine parts, these are not retail boxed hence why the serial number does not match within an Apple store, as when you purchase retail boxed products, the large extra fee you pay is to allow you to be on their system and be able to take a faulty product back to any Apple store, anywhere in the world. If you are not happy with your purchase then by all means please bring back to us and we will refund you no problem.

I have since bought another PSU and cable in “retail boxes” from our local Apple Premium Reseller. Unlike the “other” products purchased from the independent retailer, the “retail boxed” and “bundled” products are identical in every detail (except for their serial numbers, of course).

One consolation is that the Apple charger is able to charge and/or power just about any other USB-C device (as well as iPhones and iPads via a USB-C to Lightning cable). I’m convinced that being all-USB-C is a really good thing.

But I’m not sure what to do next. I’m NOT going to name and shame my source in public. I have to assume that he was acting in good faith. Perhaps I should just return the goods for the promised refund? However, I’d hate to think that someone else might end up pulling 87 Watts through a substandard cable.

Suggestions?

Google Maps app for iOS thinks we drive on the right in the UK

On July 17th I upgraded my iPhone 4S and iPad 2 to Google Maps 2.0.14.10192. Apart from various bug fixes and UI improvements, the big thing for me was proper full resolution support for iPad.

Over the past few days I have used the traffic info in Google Maps to assess congestion on motorways such as the M6, M25 and M42 … and in each case, I have found myself in the middle of traffic that the iOS app displays as being on the other carriageway.

Today I was travelling West (aka South) on the M42. The Google Maps app clearly shows the traffic on the East (aka North) carriageway …

IMG_0171

Here’s what the much-maligned native iOS Maps app displays …

IMG_0172

It would appear that this latest version of Google Maps for iOS thinks that we drive on the right in the UK!

Beyond iostat 1 second samples with DTrace

I am a big fan of “iostat -x” for a first look at how a Solaris/Illumos system’s storage is doing. A lot of third-party performance tools (e.g. Zabbix, BMC, SCOM) also use iostat or the same underlying kstats. However, when a system has more than just a few disks/LUNs the stats become hard to read and costly to store, so the temptation is to sample them less frequently. Indeed, it is not unusual for iostat data to be sampled at intervals of 15 seconds or more.

Here’s an example of “iostat -x 15” output:

device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
sd40      0.0    1.5    0.0   29.1  0.0  0.0    0.2   0   0
sd41     12.3    0.0 1735.2    0.0  0.0  0.1    7.0   0   9  
sd42     12.2    0.0 1539.1    0.0  0.0  0.1    7.2   0   9  
sd43      4.0   13.6  658.6 2004.9  0.0  0.1    2.9   0   5  
sd44      3.5   14.7  640.9 2126.1  0.0  0.1    3.1   0   6  
sd45      3.3   11.8  599.4 2097.4  0.0  0.1    3.6   0   5  
sd46      3.3   11.1  547.2 2248.4  0.0  0.1    3.8   0   5  
sd47      3.9   11.3  684.9 2248.4  0.0  0.1    4.0   0   6  
sd48      3.3   11.3  582.3 2097.4  0.0  0.1    3.8   0   6  
sd49      5.2    9.2  719.8 2119.8  0.0  0.1    3.7   0   5  
sd50      2.9    9.5  462.9 2119.8  0.0  0.0    3.7   0   5  

There’s plenty of useful information in there. However, the provided kstats do not differentiate between read and write latency. This means, for example, that many writes to a LUN which has a write cache could mask poor read latencies. So, for some time I have been using DTrace to provide IO statistics that look more like this …

    time         r/s     Kr/s     us/r         w/s     Kw/s     us/w
     115          66     7659     4354         257    33077     3535
     130          54     4753     5057         313    41208     4225
     145          81    10056     3986         263    30093     1028
     160          99    12923     4881         302    33146     5404
     175          44     4807     4161         318    41669     1638
     190          31     4017     3390         267    32117     1591
     205          40     4237     3782         267    31671     1186
     220         385    44153     7386         300    37612     9552
     235         325    38326     6620         252    29397     1217
     250          99    13185     6507         415    49924     9394

Here the basic statistics (IOPS, bandwidth and latency), differentiated by direction, are aggregated across a set of LUNs with the addition of a handy timestamp (in seconds). However, with a 15 second interval things are still pretty boring, so here’s how the “160” data looks with 1 second samples …

    time         r/s     Kr/s     us/r         w/s     Kw/s     us/w
     146           7      896     2449         186     3458     5980
     147          13     1416     3815          96      756    48367
     148         127    16319     3666           1        0       13
     149          42     5254     3738           0        0        0
     150           1      128      509           1        0       28
     151          15     1676     2522           4       16       30
     152          22     2816     3645          49      432     6385
     153          48     5920     6314           0        0        0
     154          23     2832    17770        3918   488337    18345
     155         306    37275     6016         138     2404      180
     156         473    62128     9277           0        0        0
     157           7      896      540         143     1631     1678
     158          24     2838     3176           0        0        0
     159          39     4637     4265           0        0        0
     160         347    48820     5523           4      165       67

Some will recognise the signature “breathing” associated with ZFS transaction group writes (i.e. when data from non-synchronous writes is periodically flushed to disk).
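I haven’t reproduced my D script here, but the arithmetic behind those six columns (per-direction IO counts, kilobytes and summed latencies, divided out when each interval ends) can be sketched in Python. The event tuples are hypothetical; in the real thing, DTrace’s io provider probes supply the timestamps and the buffer flags supply the direction.

```python
from collections import defaultdict

def aggregate(events, interval=1.0):
    """Bucket per-IO events into fixed intervals and emit, per direction,
    IOPS, bandwidth (KB/s) and mean latency (us) -- the same six columns
    as the tables above. Each event is (time_s, "r"|"w", kbytes, latency_us)."""
    buckets = defaultdict(lambda: {"r": [0, 0, 0], "w": [0, 0, 0]})
    for t, direction, kbytes, lat_us in events:
        b = buckets[int(t // interval)][direction]
        b[0] += 1          # IO count
        b[1] += kbytes     # KB transferred
        b[2] += lat_us     # summed latency
    rows = []
    for slot in sorted(buckets):
        r, w = buckets[slot]["r"], buckets[slot]["w"]
        rows.append((slot * interval,
                     r[0] / interval, r[1] / interval,
                     r[2] / r[0] if r[0] else 0,
                     w[0] / interval, w[1] / interval,
                     w[2] / w[0] if w[0] else 0))
    return rows
```

The division by the interval length is what turns raw counts into the per-second rates shown, which is why the same code works unchanged at 15 second, 1 second or 100ms resolution.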

The system under test serves files via NFS over a 10Gbps Ethernet, and clients complain of occasional 100-500ms spikes in read latency. However, although the above 1 second samples show a small increase in mean read latency, it doesn’t look like much to worry about.

The system is connected to a few dozen fast drives via a 4x 6Gbps SAS link. The maximum theoretical bandwidth is, therefore, 24Gbps (i.e. 3GBps). Again, the 1 second sample showing nearly 500MBps doesn’t seem that significant.

But now let’s go where iostat cannot, zooming in on the “153-155” interval with 10Hz sampling …

    time         r/s     Kr/s     us/r         w/s     Kw/s     us/w
   153.1          10     1280     3875           0        0        0
   153.2           0        0        0           0        0        0
   153.3          10     1280     5506           0        0        0
   153.4           0        0        0           0        0        0
   153.5           0        0        0           0        0        0
   153.6         100    12800     4829        7550   331625     3194
   153.7          10     1280     4082       13240  1993760    22143
   153.8          20     2560   136888        4150  2171420    82941
   153.9          40     5120    13776        1350   286795    73533
   154.0          40     4000     8753       12890    99770     1640
   154.1          80    10240     6460        1140    23800      879
   154.2          30     1450     6748         240      240      923
   154.3          20     2560     7926           0        0        0
   154.4         200    25600    11560           0        0        0
   154.5           0        0        0           0        0        0
   154.6        1420   179665     9317           0        0        0
   154.7        1300   151960     8097           0        0        0
   154.8           0        0        0           0        0        0
   154.9           0        0        0           0        0        0
   155.0          10     1280    10052           0        0        0

And there it is! In just 200ms we see 400MB written (i.e. at over 2GBps), at which point mean read latency peaks at 137ms. So ZFS’s transaction group writes appear to be starving reads, causing latency spikes invisible with 1 second sampling. We will investigate this further in a subsequent blog post.
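Both figures can be checked from the 10Hz table, bearing in mind that the r/s and Kw/s columns are rates, so each 100ms sample represents one tenth of the printed value. A quick Python check, with the numbers copied from the table above:

```python
# Write burst at 153.7-153.8: each 100ms slot moved Kw/s / 10 kilobytes.
burst_kb = (1993760 + 2171420) / 10
print(round(burst_kb / 1024), "MB written in 200ms")

# Reads in the 153.1-154.0 window: (IO count, mean latency us) per slot,
# where count = r/s / 10; zero-IO slots omitted.
slots = [(1, 3875), (1, 5506), (10, 4829), (1, 4082),
         (2, 136888), (4, 13776), (4, 8753)]
ios = sum(n for n, _ in slots)
mean_us = sum(n * lat for n, lat in slots) / ios
peak_us = max(lat for _, lat in slots)
print(ios, "reads; mean", round(mean_us), "us; peak", peak_us, "us")
```

The IOPS-weighted mean works out at around 18.5ms, close to the 17.8ms that the 1 second table reported for the same window, while the worst 100ms slot averaged 137ms. That is exactly the spike which coarser sampling averages away.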

Yes, there are many ways to have reached this point with DTrace, but this scenario does serve to demonstrate one of them. Perhaps the main point is that 1 second sampling hides far too much from view (and calls into question the usefulness of the much larger intervals favoured by so many).

Snow Leopard users, beware T-Mobile USB sticks!

I was attracted by T-Mobile’s new £15/month offer (first three months £10) with the new Mobile Broadband USB Stick 620 (capable of HSDPA 7.2). The box clearly states “Mac OS X v10.4.x or above”. However, when I installed the supplied software on my nice new MacBook Pro (which came with Snow Leopard, that is v10.6, installed), my system was rendered unusable on the next reboot. I am extremely grateful to David Glover for his workaround.

To get my machine back I had to …

1) boot in firewire target mode (hold down T while powering up)
2) attach to another Mac using a firewire cable
3) download the libcurl.4.dylib archive from David Glover’s post
4) install the above file as /usr/lib/libcurl.4.dylib on the target machine
5) unmount the target machine
6) boot the target machine normally (it works)

But to get the T-Mobile broadband to work again I had to …

1) save a copy of the “good” libcurl.4.dylib
2) run /Applications/T-Mobile Mobile Broadband Manager/Uninstall_T-Mobile Mobile Broadband Manager.app
3) insert the USB stick
4) run the installer from there (I had previously used the CDROM that came with the stick)
5) copy the “good” libcurl.4.dylib back into /usr/lib
6) restart T-Mobile Mobile Broadband Manager

I have a call outstanding with T-Mobile (who were unaware of the problem), and will post an update as and when they fix the problem. It is astonishing that third-party software should overwrite vital system files! As of now I don’t know what else they’ve broken, although I was alarmed to find other files in /usr/lib with the same timestamp …

$ ls -l /usr/lib/ | grep Feb
-rwxr-xr-x    1 pgdh  staff    163616 27 Feb  2009 bkLib.dylib
-rwxr-xr-x    1 pgdh  staff    179412 27 Feb  2009 libAgent.dylib
-rwxr-xr-x    1 pgdh  staff    208640 27 Feb  2009 libTinyXml.dylib
-rwxr-xr-x    1 pgdh  staff    522284 27 Feb  2009 libcurl.4.dylib.broken
-rwxr-xr-x    1 pgdh  staff     25464 27 Feb  2009 libmd5.dylib
$

More news as it happens.

“You know something about computers…”

I hear those dread words too often from friends and family. Despite my personal crusade to convert the world to UNIX — “Friends don’t let their friends run Winduhs” (TM) — the call is invariably a plea to rescue some dire Redmond-infected platform from oblivion.

And so it was that the door bell rang a couple of days ago. On the doorstep stood a neighbour clutching an over-sized (if you sat in the middle of the keyboard, you could probably get the advertised 5.1 surround sound), top-of-the-range, Blu-ray-equipped, totally-plastic, ACER aircraft carrier. In fact, it was the very same laptop I had helped set up a few months ago. And what a good thing it was that I’d taken the time to burn the three recovery DVDs, because some rascal had set a password on the internal SATA boot drive!

Once a password is set on a SATA drive, you’re hosed if you don’t know it. I phoned ACER, who were very nice and picked up the call immediately. However, they said such situations are outside of warranty, and that it would cost £50 plus the cost of a new drive to fix the machine. Googling around, I discovered HDD Unlock, which claims to be freeware. I moved the drive into a USB/SATA enclosure, but quickly discovered that HDD Unlock only works on directly attached IDE and SATA drives.

Dusting off an old XP machine that hadn’t been booted in years, I attached the drive and, “Hey presto!”, HDD Unlock said it could unlock the drive … for a fee. Normally I take exception to those who take advantage of others in dire straits, but it seemed like a good deal: £16 to unlock a 320GB drive (the bigger the drive, the more you pay). Being a top-of-the-range computer, it had a pretty decent hard drive (WD3200BJKT), which would have cost around £60 to replace.

One PayPal transaction and 90 minutes later (the bigger the drive, the longer it takes), the drive was unlocked, and I was able to reinstall it in the ACER monster and complete a full factory-fresh install from the media I’d previously created. I only record this here because you may know of someone in a similar situation, or you may be in such a situation, and if you’re in a situation like that …