Economic Red Herrings (again) - ugh

I hate economic red herrings.

Take this article concerning so-called “Ideal CEO to Worker Pay Ratios”: it basically makes the case that CEOs make too much relative to their workers, and that the solution is some combination of reducing CEO pay and increasing worker pay. To drive the argument home, the article frames the data within the context of income inequality, worker morale, etc.

Here is the thing:

The article’s author and the folks involved with the studies the article is based on are focused on the wrong optics as opposed to, well, reality.

For example, let’s take Wal-Mart (because they’re an easy and evil target) –

According to this, the CEO of Wal-Mart made $25.6 million last year, and when we think about how little Wal-Mart workers get paid we think: “it’s the CEO and his damn salary!” I mean, isn’t that always the argument? That the CEO’s huge salary is coming at the expense of the workers?

It seems that way, but the idea falls apart when you employ 3rd grade math:

Wal-Mart has 2.2 million employees, which makes the CEO’s $25.6 million salary about $11.60 per employee per year.

Even at a company with 220k employees, a CEO who is paid $25.6 million is only making about $116 per employee per year.
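If you want to run the numbers yourself, here’s a quick back-of-envelope sketch in Python; the only inputs are the two company sizes and the CEO pay figure above.

```python
# Back-of-envelope: spreading the CEO's pay across the workforce.
ceo_pay = 25_600_000  # $25.6 million, per the article

for employees in (2_200_000, 220_000):
    per_employee = ceo_pay / employees
    print(f"{employees:,} employees -> ${per_employee:,.2f} per employee per year")

# Prints roughly:
#   2,200,000 employees -> $11.64 per employee per year
#   220,000 employees -> $116.36 per employee per year
```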

Maybe the CEO to Worker Pay ratio isn’t all that important, or at least isn’t the reason a group of workers is underpaid. At the end of the day, capping CEO pay isn’t going to free up cash to pay workers; it’s just going to give the people who rail against CEO pay fewer things to write about.

Ultimately the problem isn’t the CEO’s pay vs. worker pay, it’s the relative value of worker pay to living expenses.

The other piece is that you can’t just “peg” an ideal salary for each worker based on the CEO to worker ratio, because there is only ONE CEO and in some cases MILLIONS of workers.

The average Wal-Mart worker makes $12.83/hour, or $26,686.40/year, which means the CEO earns about 959x what the average worker makes. Per the article, the ideal ratio in the US is 6.7x, so if we divide $25.6 million by 6.7 we arrive at the conclusion that the average Wal-Mart worker “should” make about $3.8 million/year.

It’s a nice number that some well-meaning blogger will run with, but, well, paying 2.2 million people an average salary of $3.8 million would cost about $8.4 TRILLION, AKA roughly ½ of America’s 2013 GDP.
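For anyone who wants to check that arithmetic, here’s the same math as a short Python sketch. All inputs are the figures cited above, and full-time is assumed to be 2,080 hours/year.

```python
# The Wal-Mart "ideal ratio" math, using the figures cited above.
ceo_pay = 25_600_000           # CEO pay
avg_wage = 12.83 * 2080        # $12.83/hour, full time = $26,686.40/year
employees = 2_200_000
ideal_ratio = 6.7              # the article's "ideal" CEO-to-worker ratio

actual_ratio = ceo_pay / avg_wage             # ~959x
ideal_worker_pay = ceo_pay / ideal_ratio      # ~$3.8 million/year
total_payroll = ideal_worker_pay * employees  # ~$8.4 trillion

print(f"Actual ratio: {actual_ratio:.0f}x")
print(f"'Ideal' worker pay: ${ideal_worker_pay:,.0f}/year")
print(f"Payroll at that pay: ${total_payroll / 1e12:.1f} trillion")
```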

So the question offered is: what are we even doing here? Why did a bunch of academics spend so much time gathering data to make arguments around CEO pay caps or raising worker wages, when any 3rd grader who can do long division could tell you that these numbers don’t actually matter to real people?

That time could’ve been better spent trying to figure out ways to make those companies more efficient, or some sort of pricing and efficiency model that would free up cash to pay people more.

You know, the things executives do in real life when they want to free up cash to invest in their businesses and/or raise wages.

Fantasy articles may get clicks and stir things up, but they don’t put forth any real solutions. 

Realistic Minimum Wage Solutions

Note: This is a remix & blend of two earlier posts on minimum wage. 

I have mixed feelings about the current minimum wage discussion. On one hand I don’t have a problem with raising it; on the other, I’m not sure if $15/hour is realistic, and I definitely think a lot of the arguments people are making aren’t grounded in hard data. Simply put: if you’re going to tell businesses they can easily afford to pay a certain amount in wages, you should at the very least have their balance sheets in front of you.

Locally, where this conversation really started to pique my interest was when I saw memes going around touting the fact that Washington State leads the nation in job growth AND has the highest minimum wage in the country. It’s a great story; unfortunately it’s more than a little intellectually dishonest at worst, or a case of confusing cause and effect at best.

Looking at this map of Washington State’s unemployment rates tells you the real story: King and Snohomish Counties (Seattle area) have low unemployment rates, and the rest of the state has significantly higher ones. Furthermore, if you dig into area (and county) data you see rising unemployment outside of Seattle.

One only has to drive around Eastern Washington, the Olympic Peninsula, or really 60-90 minutes in any direction outside of Seattle to understand that economically there are two Washingtons: a highly affluent one, and another that straddles the worlds of barely middle class and low income.

On one of future wifey’s and my “Wilderness Trips” around the state, we found ourselves in a town that had more bail bondsmen than restaurants that were still in business. It’s worth noting that we were only about an hour away from Seattle by ferry.

Job growth in WA is really a story about the growth of the tech industry in Seattle: the metro area has seen 12% growth in tech jobs over the past two years (#1 in the country), and 43% growth in tech employment from 2001-2011. All of those high-paying jobs, and the requisite spending on cars, real estate, groceries, restaurants, etc., are what’s driving Seattle’s job growth.

It’s worth noting that over that same 2001-2011 period, as tech employment grew by 43%, the median income of the Seattle area rose by 33%. Meanwhile, median household income in the country overall was dropping during that time.

Job growth in the Seattle area isn’t a story about the success of a higher minimum wage; it’s a story about Microsoft, Amazon and a high-income city driving economic growth across the board. A city whose residents can absorb the price increases.

A $15/hour minimum wage would probably work in Seattle, but how is that going to play in cities where people don’t earn as much?

To put it simply: the median income in Seattle is $63k, while the median income in cities like Cleveland, Memphis, St. Louis, Milwaukee or Syracuse is between $25k and $35k.

Can we really talk about a $15/hour minimum wage in Seattle and Milwaukee as if the impact, the ability of local businesses to pay, and the market’s ability to support it are the same?

Again, Seattle’s median income is about double the median income in many midwestern and southern cities, AND Seattle has multiple close-in suburbs/small cities with populations approaching 100k and median incomes in the $85-$100k range.

This isn’t about politics, ideology or being “for the worker” per se; it just comes down to the financial realities of making the minimum wage less than 1/2 the median income in one city ($15/hour = $31,200) and within 10-15% of the current median in another.

I.e. you can’t argue a mathematical situation with ideology, nor can you compare past minimum wage increases of 10-20% to an increase that’s in the 40-90% range (raising current minimums to $15/hour), because the magnitudes are vastly different.
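To make the magnitude point concrete, here’s a rough sketch. The median incomes are the figures above; the two current minimums are illustrative starting points, not any particular city’s actual rate.

```python
# $15/hour against local median incomes, and the size of the jump itself.
FULL_TIME_HOURS = 2080
proposed_annual = 15 * FULL_TIME_HOURS  # $31,200/year

for city, median in [("Seattle", 63_000), ("Lower-income metro", 30_000)]:
    print(f"{city}: $31,200 is {proposed_annual / median:.0%} of the median income")

# And the raise itself, from two illustrative current minimums:
for current in (9.32, 10.00):
    pct = (15 - current) / current
    print(f"${current:.2f} -> $15.00 is a {pct:.0%} increase")
```

Run it and you get roughly 50% of the median in Seattle versus about 104% of the median in the lower-income metro, and raises on the order of 50-60% rather than the 10-20% bumps we’re used to.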

Nothing happens in a vacuum, and perhaps that’s the biggest issue I have with this discussion: many of the proponents of $15/hour seem to think there is a magical pot of money available to cover the wage increases, that you don’t have to consider the differences in local labor markets, and that there will be NO negative impacts.

Again, I’m not advocating against raising the minimum wage in general, nor am I claiming that doing so will destroy jobs; instead, I’m questioning the wisdom of uniformly fighting for $15/hour without considering the mathematical realities of the local markets first.

Instead I think we need a mathematical and market-driven approach to the minimum wage situation:

We set a new national wage floor that’s in the $9-$10 range. 

For areas deemed high income (like Seattle) we have a formula that would set the minimum wage at, say, 40-50% of the median income of that area, up to a certain cap. I’m proposing the cap because otherwise the minimum wage in some small cities outside of Seattle (and other affluent cities) would be in the $20-$22+ range, and it’s just not realistic that markets and businesses can sustain paying people in the mid-$40k range to work at many of the small businesses that pay minimum wage.
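Here’s a minimal sketch of what that formula might look like. The specific floor, percentage, and cap values are placeholders I picked to illustrate the mechanics, not numbers from any study.

```python
# A sketch of the proposal: national floor, local minimum pegged to a share
# of area median income, and a cap so affluent suburbs don't end up at $20+.
FULL_TIME_HOURS = 2080

def local_minimum_wage(area_median_income: float,
                       floor: float = 9.50,          # national wage floor, $/hour
                       share_of_median: float = 0.45,
                       cap: float = 16.00) -> float:
    pegged = (area_median_income * share_of_median) / FULL_TIME_HOURS
    return round(min(max(pegged, floor), cap), 2)

for area, median in [("Seattle", 63_000),
                     ("Affluent close-in suburb", 95_000),
                     ("Lower-income metro", 30_000)]:
    print(f"{area}: ${local_minimum_wage(median):.2f}/hour")

# Seattle lands around $13.63, the affluent suburb hits the cap,
# and the lower-income metro stays at the national floor.
```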

Remember, in most cases it’s not the 1% paying minimum wage, it’s a small local business owner who earns a middle to upper-middle income from his or her business, and while these businesses can absorb wage increases in the 10-20% range, it’s a totally different story when we start talking about doubling wage costs.

Either way, until both sides of this debate come together and have a realistic conversation based on the financial reality many businesses operate under, the situation seems ripe for either failure or fairly significant unintended consequences. “You have plenty of money, pay it, and if you say no you’re greedy” isn’t exactly a valid mathematical argument, and a valid mathematical argument is exactly what this discussion needs.

The McDonald’s situation is a perfect case in point: 

As I noted before, a McDonald’s franchise is a local small business. The financials of the franchisor (McDonald’s Corp) are totally separate from those of your local franchise. Just because McDonald’s could absorb higher costs at its corporate-owned stores doesn’t mean that Joe local franchisee can. Consider: typical wage costs at a franchise run about 24% of that location’s revenue, and the franchise owner is making about $0.05 on the dollar. Meanwhile, the salary of the CEO of McDonald’s Corp is 0.011% of total revenue.

In other words: $0.24 of every dollar you spend at McDonald’s goes to the salaries of the people in the restaurant, $0.05 goes to a small local business owner, and you have to spend $100.00 to contribute a penny to the CEO’s salary.
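Or, as a sketch of where each dollar goes, using the rough percentages above (treat them as ballpark figures, not audited numbers):

```python
# Where a dollar spent at a typical franchise goes, per the rough figures above.
wages_share = 0.24      # ~24% of a location's revenue goes to in-store wages
owner_share = 0.05      # the franchisee keeps roughly $0.05 on the dollar
ceo_share = 0.00011     # CEO pay is ~0.011% of total revenue

for spend in (1.00, 100.00):
    print(f"On ${spend:.2f} spent: crew ${spend * wages_share:.2f}, "
          f"owner ${spend * owner_share:.2f}, CEO ${spend * ceo_share:.3f}")

# On $1.00 spent: crew $0.24, owner $0.05, CEO $0.000
# On $100.00 spent: crew $24.00, owner $5.00, CEO $0.011 (about a penny)
```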

E.g. the situation with McDonald’s franchises* is more a concern around avoiding price increases that could hurt sales than it is one around greed or a business owner refusing to dip into his/her profits to pay workers. 

Said fears may or may not be legit depending on the market, but when you factor in wage increases, employer taxes, business taxes, etc., the cost of certain meals (especially for a family) would see a marked increase. Dinner at McDonald’s going from $26 to $33 might be a big deal for some people.

Progressives fighting for income equality need to stop shooting themselves in the foot via shoddy analysis, misunderstanding cause and effect and comparing apples and oranges. It’s not helping the situation, it’s making things worse. 

Being a business owner doesn’t make one magically able to pay any wage demanded by activists, and until activists, workers and business owners all sit at the table, discuss their individual realities and the realities of the markets they operate in, and come up with a realistic, mathematically driven decision, I suspect we’ll see either A) minimal progress or B) a forced solution that will be rife with unforeseen consequences.

On the flip side, conservatives need to stop acting like ANY minimum wage increase will destroy small business, because that isn’t true either. People do need to be paid more; the question offered is: what is a sustainable and realistic amount?

As I noted before, a common sense, market and mathematically driven solution is what’s needed here, and while the wage increases it would produce won’t be what many activists want, it WOULD produce something sustainable and realistic.

In the end, isn’t that what everyone wants? 

*To be sure, you could get rid of a lot of these issues if McDonald’s were to dump the franchise model (a lot of high-paying fast food chains aren’t franchises, e.g. Dick’s in Seattle), but that would come with a lot of costs, and thousands of small business owners would lose their livelihoods.

Nothing happens in a vacuum. 

Random Thoughts for Tuesday September 2, 2014

Apple Payments Rumors: I haven’t paid that much attention to them, as they seem to be written from the perspective that mobile payments are a well-established market, or that if Apple gets into the game people will suddenly and rapidly adopt the digital wallets they’ve been ignoring for years now.

At the end of the day it comes down to this: the same consumers who have been ignoring tap-to-pay debit and credit cards for years aren’t likely to suddenly start using tap-to-pay mobile payment apps without an incentive that goes beyond the app existing.

I know, I know, a bunch of my fellow nerds WANT to pay with their phones. Well guess what: in this case WE DON’T MATTER. Until consumers en masse want to use tap to pay (either with phones or their cards), merchants won’t install tap-to-pay POS devices, and the adoption of whatever Apple is planning will be muted and follow the same trajectory as everyone else’s digital payment plans.

Case in point: I installed a digital wallet on my phone as an experiment and I rarely come across a merchant where I can use it, and I live in highly affluent and tech forward Seattle.

What’s the situation for people in other parts of the country?

The phrase “Will it play in Peoria?” really does apply here, as does the phrase “what’s in it for me?”

I think consumers need some sort of incentive or rewards program for using the digital wallet that’s on TOP of whatever they get from the credit card attached to it. Otherwise it’s a zero-sum game: using my mobile payment app or my credit card makes no difference.

Until the tech industry stops looking at digital payments from the perspective of, well, our fellow nerds and starts looking at them from the perspective of average people, I’m not sure digital wallets will gain much traction. Merely existing is not a value proposition.

I think the EMV upgrade cycle for retailers’ POS systems is a huge part of this too. Digital wallets typically work via the same technology as “tap to pay” credit and debit cards, technology that has been around for about a decade but hasn’t gained much traction. At the moment tap-to-pay POS terminals are rather rare. So, if retailers go through the massive EMV upgrade cycle over the next 13 months and buy gear that can handle EMV but not tap to pay, digital wallets will more or less be in the same position they are now.

I.e. how can they gain traction when consumers have few places where they can use them, and retailers are reluctant to spend more money on upgrades due to having just spent billions on the EMV upgrade cycle? 

Digital Security: the iCloud breach, and whether it was a fault with Apple or a fault with user passwords, brings up a ton of issues around security, whether it’s taken seriously enough, where the fault lies, etc. But one interesting wrinkle I noticed:

Whenever X service is breached, you always have people popping up claiming that the breach is evidence that a competing service is more secure.

Here is the thing: it isn’t necessarily evidence of anything, security-wise.

Criminals are always going to go after the most valuable target because they want the greatest return on their villainous investment. The breached service could actually be MORE secure than the non-hacked ones; those other services just weren’t as valuable, so no one bothered attacking them.

It’s the same reason the most stolen cars in America are always the most popular ones; thieves steal cars to make money, so why steal unpopular cars with a limited market for parts?

So before you say “no one hacked my phone’s cloud service”, sit back and ask yourself: “does anyone think my phone’s cloud service is worth hacking?”

A device or service is more secure because it’s harder to breach, not because there is a lack of interest. 

Poor understanding of cause and effect

I hate articles like this.

NPR and other outlets ran articles noting that states that raised their minimum wage had higher job growth; the idea was to debunk the notion that raising the minimum wage would hurt jobs.

Here is the problem (quote from the article): “Economists who support a higher minimum say the figures are encouraging, though they acknowledge they don’t establish a cause and effect. There are many possible reasons hiring might accelerate in a particular state.”

In other words, there was no definitive link established; in fact, the report didn’t even break down the TYPES of jobs being created. You could easily have a situation like we have here in Washington State, where we lead the nation in job growth and have the highest minimum wage, which leads people to think the two are linked. HOWEVER, if you dig into the numbers you find increasing levels of unemployment outside of the Seattle area, and Seattle-area job growth being driven by the tech industry.

 I.e. minimum wage and job growth aren’t necessarily linked.

Anyway, I’m not arguing for or against the minimum wage; I’m arguing for accurate headlines that aren’t just wishful thinking or being used to advocate a viewpoint with scant evidence.

Especially when the minimum wage discussion needs to consider local markets and the magnitude of the increases: $15/hour might work in one city and not in another, and going from $10 to $12 in one city won’t have the same impact as going from $8 to $12 in another.

Any 4th grader who understands percentages can tell you that. 

It’s basic mathematical common sense that people seem to want to ignore. There is nothing wrong with advocating for people to get paid more, but since money doesn’t grow on trees, let’s inject some common sense and math into it.

Tying convenient variables together and claiming they’re related isn’t going to help anyone.

Tablet App Bait & Switch

It’s the most repeated form of advertising for software, whether it’s presented as a cloud based application, an “app”, or regular software:

You show a laptop screen, a tablet screen and a smartphone screen, all with the same GUI on them. The idea is that you get more or less the same functionality, layout, data presentation, etc., in the mobile version as you do in the fully featured desktop one.

Sometimes though, this isn’t the case and it’s driving me up the wall.

The tablet version of a financial software package I use for a couple of side businesses and a non-profit I run doesn’t show the main page of the regular application (desktop or browser based); instead, its main page surfaces only the least useful parts of that front page. In fact there is NO way to get the app to show the same financial snapshot as the desktop or browser-based versions, thus begging the question:

Why did I download this garbage in the first place?

A CRM application I use for my businesses is great on the desktop via my browser, but the tablet version (again) doesn’t have the layout shown in the ads for it; instead it’s a watered-down version that makes you want to just use your laptop instead.

Making matters worse, if I try to pull it up in my tablet’s browser I’m taken to the same annoying layout as the app. That leaves me with an app that’s useful to a degree, but one that makes me miss the desktop version so much that I’m often inclined not to use it at all.

For example:

If I were to pull up the financial software via the desktop app or the browser on my laptop, the first page shows bank balances, trailing 30-day profit/loss, open invoices and pending bills. It’s the information you’d MOST want to see if you’re pulling up the financials on the road with a tablet.

Instead I’m presented with a page that just shows recent history, the ability to take a few actions across certain categories, and some light reporting.

WTF?

The CRM app is significantly better as it’s actually proved useful on the road, but there is a lot of customer data updating and customization I can’t do. It’s often easier to just write something down and update it when I get back to my laptop.

Knowing that software developers tend to be intelligent I find myself scratching my head at this nonsense.

Didn’t the product development team test or review the tablet apps and think to themselves: “wow, we took out a lot of the usability?”

Didn’t the software development team say: “it’s just a browser-based app, there is no reason we can’t deliver this same functionality to our customers on their tablets, especially certain dashboard views that show the key information they need”?

I can’t imagine signing off on either app; I would’ve wanted both apps significantly revised so that you’re delivering value to customers, not irritating them.

I look at these apps and I see two companies that aren’t completely serious about their mobile app strategy, and instead want to just “have an app” as a selling point, but they don’t really mean for you to get much use out of it.

Meanwhile, the limiting factor for MS Excel for iPad is the fact that you’re using a device without a keyboard, mouse and an external monitor for more screen real estate, NOT the app itself.

No wonder I use Excel for iPad more than both of the apps I mentioned combined.

Again, this shows Microsoft’s commitment, as in many respects Excel is used for more complex activities than the apps I mentioned previously.

The above is why I think there is a low signal to noise ratio when it comes to app ecosystems, whether it’s for a platform or an individual company touting its app. Software companies just want to “have an app” so they seem legit, and the companies behind the various app stores/hardware platforms just want to push their app numbers up.

Delivering value to the customer is a secondary concern.

If a legit 3rd player is going to emerge to take on the iOS/Android duopoly, I think the key will be having a high signal-to-noise ratio app-wise.

A platform that says: “We set a quality bar for the functionality of our apps so you don’t feel like the victim of bait and switch”, will be sure to win customers in the coming years as at some point, people are going to get fed up with this nonsense. 

This is especially true for cloud-based apps. Just think about it: if I have to pay the same license fee regardless of whether I use the app on a single device or multiple devices, there is really no reason to make the tablet version rubbish.

P.S. You’re probably wondering why I didn’t call out the companies specifically. The reason is that I have college friends who work at both companies, so I felt bad ripping their employers in public. Plus I have better channels to take my complaints to, so there’s no need to flog them in public.

Flawed Technology Reviews & Media Coverage

It occurs to me that a lot of hardware reviews are inherently flawed.

But first, a quick side-step for a thought experiment:

Imagine if a car reviewer said he was going to test a 2015 Honda Accord competitor for a month to see how it stacks up against the class leader. A month later the reviewer comes back with a long list of superlatives about how the new car is better than his 2005 Accord and how he’s going to immediately switch to this new car.

Would anyone take that review seriously, especially people who are about to replace their ’05 Accords or who drive say 2011, 2012 or 2015 Accords?

Throw in any other model that’s considered best in class like a BMW 3-Series or Mercedes E-Class, and you’re sure to get the same level of “you’re joking, right?”

So considering how much faster hardware (laptops, tablets, smartphones) advances compared to cars, where one smartphone year is like four car years, why do we tolerate the same thing with hardware reviews?

Why do we tolerate reviewers comparing the latest and greatest from one company against their older, PERSONAL devices?

Shouldn’t the latest Surface Pro 3 be compared against the latest MacBook Air, and not the reviewer’s older model?

If a reviewer does the “I used X phone for a month” thing, shouldn’t he or she compare it to the latest and greatest on the other OSes? E.g. if you review the latest Android phone, compare it to a brand new iPhone and Windows Phone.

After all, depending on a host of factors, the differences between two devices could easily just come down to the reviewer screwing up their own device, the software they’ve installed, wear and tear, the time they dropped it, etc.  

We don’t tolerate old and used vs. new for cars, so why are we tolerating it for tech? It goes against common sense and the real world impact of using your devices in day to day life.

Better yet what’s more valuable:

“I tried out the new MacBook Air and I like it better than my two-year-old PC, so I’m switching” / “I tried the new Surface Pro 3 and I like it better than my two-year-old MacBook Air, so I’m switching”

OR

“I used a brand new MacBook Air and a brand new Surface Pro 3 side by side for six weeks and, after careful consideration, I prefer….”

Final thought: I wish tech writers and reviewers would stop revealing the devices they own, stop saying things like “I cover Apple/Google/Microsoft”, and stop tweeting about how they’re traveling to a company’s HQ to interview random people, etc. From the perspective of someone who has worked within various large tech companies, it makes it impossible to take these writers seriously, as too many of them sound like, well, employees of the company when it comes to how they write about the companies they cover, their tweets, etc.

I say this as I’ve definitely read articles and thought: “wait, this sounds like the internal emails touting that product launch”.

The other day, I saw a tech writer tweet something pro a company they “cover” from the comments section of an article. 

How is that journalism when you’re actively promoting a single company, instead of objectively discussing it and other companies? How can you really give your readers an objective view of the marketplace when you’re only focused on one participant? 

Call me crazy but if a writer only studies smartphone offerings from Apple, Google or Microsoft, instead of studying them all, how can they truly give their readers anything resembling intelligent insights? 

I say this because, as someone who actually works in technology, I don’t have the luxury of “choosing a side”. I use a Windows 8.1 machine and a MacBook side by side daily. I use multiple smartphone platforms, and I’m going to get an Android and a Windows tablet to go with my iPad, so I can speak intelligently about all of them to potential customers/clients.

The tech mediasphere already suffers from a low signal-to-noise ratio and a lack of objectivity; fans and PR shills masquerading as journalists just make it worse.

Better yet - when it comes to choosing a side, the success of MS Office for iPad (the reason I bought an iPad) proves that technology companies can’t think in this manner either. 

So why are the journalists that “cover” them choosing sides?