Fast Company Design has written Tech Has A Diversity Problem–So This Designer Went To Kentucky, about John Maeda's work pairing some of the top designers in the world with students in Paintsville, Kentucky.
James Beshara has a really interesting read on how communication will change and evolve in a post-verbal world, namely one where human/brain interfaces like Neuralink can more directly transmit thought between people than the medium of language allows today.
After reading the essay, I wonder whether people's thoughts, or the neural pathways they activate, would actually make any sense if directly transmitted into another brain, since each of us has a unique internal set of pathways and a framework for parsing and understanding the world. The essay assumes we'd understand and have more empathy with each other, but that seems like a leap. It seems likely the neural link would need its own set of abstractions, perhaps even unique per person, similar to how Google Translate's AI invented its own meta-language.
Idea-viruses that cause outrage have already been weaponized in today's discourse by algorithms optimizing for engagement, and directly brain-transmitted memes seem especially risky for appealing to our base natures or causing amygdala hijack. But perhaps a feature of these neural interface devices could counteract that, with a command like "tell me this piece of news but suppress my confirmation bias and tribal emotional reactions while I'm taking it in."
I love USB, cables, and charging things, so MacRumors' comparison of different wired and wireless charging options and speeds for the iPhone X is my catnip. tl;dr: USB-C with a USB-C-to-Lightning cable gives you far and away the fastest times. I've found this true for the iPad Pro as well.
I really enjoyed connecting with the WordPress community in Nashville this past weekend. On Saturday I delivered the State of the Word presentation alongside Mel, Weston, and Matías. There's always a post-event buzz, but I definitely noticed a change in the tenor of people's thoughts on Gutenberg after the presentation and demo. The video is above; check it out when you get a chance.
This is a long quote/excerpt from Adam Robinson I’ve been holding onto for a while, from Tribe of Mentors. Worth considering, especially if you strive to work in a data-informed product organization.
Virtually all investors have been told when they were younger — or implicitly believe, or have been tacitly encouraged to do so by the cookie-cutter curriculums of the business schools they all attend — that the more they understand the world, the better their investment results. It makes sense, doesn’t it? The more information we acquire and evaluate, the “better informed” we become, the better our decisions. Accumulating information, becoming “better informed,” is certainly an advantage in numerous, if not most, fields.
But not in the counterintuitive world of investing, where accumulating information can hurt your investment results.
In 1974, Paul Slovic — a world-class psychologist, and a peer of Nobel laureate Daniel Kahneman — decided to evaluate the effect of information on decision-making. This study should be taught at every business school in the country. Slovic gathered eight professional horse handicappers and announced, “I want to see how well you predict the winners of horse races.” Now, these handicappers were all seasoned professionals who made their livings solely on their gambling skills.
Slovic told them the test would consist of predicting 40 horse races in four consecutive rounds. In the first round, each gambler would be given the five pieces of information he wanted on each horse, which would vary from handicapper to handicapper. One handicapper might want the years of experience the jockey had as one of his top five variables, while another might not care about that at all but want the fastest speed any given horse had achieved in the past year, or whatever.
Finally, in addition to asking the handicappers to predict the winner of each race, he asked each one also to state how confident he was in his prediction. Now, as it turns out, there were an average of ten horses in each race, so we would expect by blind chance — random guessing — each handicapper would be right 10 percent of the time, and that their confidence with a blind guess would be 10 percent.
So in round one, with just five pieces of information, the handicappers were 17 percent accurate, which is pretty good, 70 percent better than the 10 percent chance they started with when given zero pieces of information. And interestingly, their confidence was 19 percent — almost exactly as confident as they should have been. They were 17 percent accurate and 19 percent confident in their predictions.
In round two, they were given ten pieces of information. In round three, 20 pieces of information. And in the fourth and final round, 40 pieces of information. That’s a whole lot more than the five pieces of information they started with. Surprisingly, their accuracy had flatlined at 17 percent; they were no more accurate with the additional 35 pieces of information. Unfortunately, their confidence nearly doubled — to 34 percent! So the additional information made them no more accurate but a whole lot more confident. Which would have led them to increase the size of their bets and lose money as a result.
Beyond a certain minimum amount, additional information only feeds — leaving aside the considerable cost of and delay occasioned in acquiring it — what psychologists call “confirmation bias.” The information we gain that conflicts with our original assessment or conclusion, we conveniently ignore or dismiss, while the information that confirms our original decision makes us increasingly certain that our conclusion was correct.
So, to return to investing, the second problem with trying to understand the world is that it is simply far too complex to grasp, and the more dogged our attempts to understand the world, the more we earnestly want to “explain” events and trends in it, the more we become attached to our resulting beliefs — which are always more or less mistaken — blinding us to the financial trends that are actually unfolding. Worse, we think we understand the world, giving investors a false sense of confidence, when in fact we always more or less misunderstand it.
You hear it all the time from even the most seasoned investors and financial “experts” that this trend or that “doesn’t make sense.” “It doesn’t make sense that the dollar keeps going lower” or “it makes no sense that stocks keep going higher.” But what’s really going on when investors say that something makes no sense is that they have a dozen or whatever reasons why the trend should be moving in the opposite direction, yet it keeps moving in the current direction. So they believe the trend makes no sense. But what makes no sense is their model of the world. That’s what doesn’t make sense. The world always makes sense.
In fact, because financial trends involve human behavior and human beliefs on a global scale, the most powerful trends won’t make sense until it becomes too late to profit from them. By the time investors formulate an understanding that gives them the confidence to invest, the investment opportunity has already passed.
So when I hear sophisticated investors or financial commentators say, for example, that it makes no sense how energy stocks keep going lower, I know that energy stocks have a lot lower to go. Because all those investors are on the wrong side of the trade, in denial, probably doubling down on their original decision to buy energy stocks. Eventually they will throw in the towel and have to sell those energy stocks, driving prices lower still.
In the lead-up to the WordCamp US we're in right now, I chatted with Brian Krogsgard of Post Status for an hour-long podcast, and we spoke about the core releases this year, Gutenberg, React, WooCommerce, and WordPress.org. On the 29th I'll be talking to WP Tavern, so tune in then as well. For something completely different, I was on the new OFF RCRD podcast with Cory Levy about the earliest days at Automattic and entrepreneurship.
When I look back over the last 25 years, in some ways what seems most precious is not what we have made but how we have made it and what we have learned as a consequence of that. I always think that there are two products at the end of a programme; there is the physical product or the service, the thing that you have managed to make, and then there is all that you have learned. The power of what you have learned enables you to do the next thing and it enables you to do the next thing better. — Jony Ive
From the Wallpaper article on the new Apple campus.
As an interim update to my 2017 gear post, I'd like to strongly endorse the Aer Fit Pack 2 as my new primary backpack, replacing the Lululemon bag I suggested before. It has better material, much better zippers, a logical design, better pocket distribution inside, and it's cheaper! I put this bag and its predecessor through all the rounds, including taking it to Burning Man, and it's been a champ. If you're reading this and work for Automattic, this bag is also now available as an official choice for your bag, and it'll come embroidered with a cool logo. (Previously we only offered Timbuk2.)
As you prepare for Halloween you'll enjoy this Drake parody, especially if you're familiar with his catalog.
[Gauguin] was penniless and adrift, trying to paint his way through the devastations of his dying marriage, his rejection by the cliques of the Parisian art establishment, and the precarity of his friendship with Vincent van Gogh, who shortly before Christmas had assaulted him with a razor and, after Gauguin’s departure that evening, used the same blade to cut off his own ear […] Despite the promises of the name, it can be a challenge to find actual olives at Olive Garden.
Probably my favorite food writing I've read this year is Helen Rosner's comprehensive review of Olive Garden for Eater.
Matías Ventura, the lead of the editor focus for WordPress, has written Gutenberg, or the Ship of Theseus to talk about how Gutenberg's approach will simplify many of the most complex parts of WordPress, building pages, and theme editing. It's also worth a read if you want a peek at some of the things coming down the line with Gutenberg, including serverless WebRTC real-time co-editing.
Nautilus Magazine has an interesting look at the question of Is Matter Conscious? Worth reading to learn what the word "panpsychism" means. Hat tip: John Vechey.
I am surprised and excited to see the news that Facebook is going to drop the patent clause that I wrote about last week. They’ve announced that with React 16 the license will just be regular MIT with no patent addition. I applaud Facebook for making this move, and I hope that patent clause use is re-examined across all their open source projects.
Our decision to move away from React, based on their previous stance, has sparked a lot of interesting discussions in the WordPress world. Particularly with Gutenberg there may be an approach that allows developers to write Gutenberg blocks (Gutenblocks) in the library of their choice including Preact, Polymer, or Vue, and now React could be an officially-supported option as well.
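To make the library-agnostic idea concrete, here's a hypothetical sketch (not the actual Gutenberg API; all names here are invented for illustration) of what a framework-neutral block layer could look like: each block registers a name, default attributes, and a render function, and the editor core stays indifferent to whether that function is backed by Preact, Vue, or React under the hood.

```javascript
// Hypothetical sketch of a framework-agnostic block registry.
// The editor core only ever calls the block's render function;
// which view library produced that function is irrelevant to it.

const blockRegistry = new Map();

function registerBlock(name, { attributes = {}, render }) {
  if (typeof render !== 'function') {
    throw new Error(`Block "${name}" must provide a render function`);
  }
  blockRegistry.set(name, { attributes, render });
}

function renderBlock(name, attrs = {}) {
  const block = blockRegistry.get(name);
  if (!block) throw new Error(`Unknown block: ${name}`);
  // Merge saved attributes over the block's declared defaults.
  return block.render({ ...block.attributes, ...attrs });
}

// A "paragraph" block whose render function could just as easily
// delegate to Preact's h(), Vue's createElement, or React.createElement
// instead of returning a plain HTML string.
registerBlock('core/paragraph', {
  attributes: { content: '' },
  render: ({ content }) => `<p>${content}</p>`,
});

console.log(renderBlock('core/paragraph', { content: 'Hello, Gutenberg' }));
// <p>Hello, Gutenberg</p>
```

The point of the sketch is the seam: as long as blocks agree on the registration contract, the choice of rendering library becomes a per-block implementation detail rather than a platform-wide mandate.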
I want to say thank you to everyone who participated in the discussion thus far; I really appreciate it. The vigorous debate in the comments here and on Hacker News and Reddit was great for the passion people brought and the opportunity to learn about so many different points of view; it was even better that Facebook was listening.
Big companies like to bury unpleasant news on Fridays: A few weeks ago, Facebook announced they have decided to dig in on their patent clause addition to the React license, even after Apache had said it’s no longer allowed for Apache.org projects. In their words, removing the patent clause would "increase the amount of time and money we have to spend fighting meritless lawsuits."
I'm not judging Facebook or saying they're wrong, it's not my place. They have decided it's right for them — it's their work and they can decide to license it however they wish. I appreciate that they've made their intentions going forward clear.
A few years ago, Automattic used React as the basis for the ground-up rewrite of WordPress.com we called Calypso, which I believe is one of the larger React-based open source projects. As our general counsel wrote, we made the decision that we'd never run into the patent issue. That is as true today as it was then, and overall we’ve been really happy with React. More recently, the WordPress community started to use React for Gutenberg, the largest core project we've taken on in many years. People's experience with React and the size of the React community — including Calypso — were factors in trying out React for Gutenberg, and that made React the new de facto standard for WordPress and the tens of thousands of plugins written for WordPress.
We had a many-thousand word announcement talking about how great React is and how we're officially adopting it for WordPress, and encouraging plugins to do the same. I’ve been sitting on that post, hoping that the patent issue would be resolved in a way we were comfortable passing down to our users.
That post won't be published, and instead I'm here to say that the Gutenberg team is going to take a step back and rewrite Gutenberg using a different library. It will likely delay Gutenberg at least a few weeks, and may push the release into next year.
Automattic will also use whatever we choose for Gutenberg to rewrite Calypso. That will take a lot longer, and Automattic still has no issue with the patents clause, but the long-term consistency with core is worth more than a short-term hit to Automattic’s business from a rewrite. Core WordPress updates go out to over a quarter of all websites; having them all inherit the patents clause isn’t something I’m comfortable with.
I think Facebook’s clause is actually clearer than many other approaches companies could take, and Facebook has been one of the better open source contributors out there. But we have a lot of problems to tackle, and convincing the world that Facebook’s patent clause is fine isn’t ours to take on. It’s their fight.
The decision on which library to use going forward will be another post; it’ll be primarily a technical decision. We’ll look for something with most of the benefits of React, but without the baggage of a patents clause that’s confusing and threatening to many people. Thank you to everyone who took time to share their thoughts and give feedback on these issues thus far — we're always listening.
The illustrious Chance the Rapper was looking for a new intern.
I'm looking for an intern, someone with experience in putting together decks and writing proposals
— Lil Chano From 79th (@chancetherapper) March 27, 2017
Some people responded with regular resumes, replying as images, but Negele “Hopsey” Hospedales decided to make a website on WordPress.com:
maybe I'm extra, but I think resumes are old fashion. I built a website instead. #ChanceHireHospey https://t.co/DmYvxAQu61
— madebyhosp. (@Hospey) March 28, 2017
The happy ending is written up in Billboard: he got the gig and went on tour with Chance. Hospey wrote a great article on it himself: How To Work For Your Favourite Rapper.
So, I’ve Been Thinking
Recently, I had the chance to sit down for a drink with Grady Booch. For anyone who doesn’t know his name yet, he’s a technology pioneer, innovator, and all-around fascinating guy. He was a primary creator of the Unified Modeling Language, and his career has included everything from work at NASA (where he was literally the guy sitting in front of the big red self-destruct button during launches) to his current gig serving as Chief Scientist for Software Engineering at IBM Research. I can also tell you he makes a mean Hawaiian-twist margarita.
Grady’s been at the center of some of the greatest developments in coding and technology of the past few decades, which makes him a deep well for serious topics. Our conversation touched a lot of areas, but I was most fascinated by his take on one topic that the technology sector wrestles with every day: the ethics of code.
I don’t think it’s contentious to say that digital innovations are driving changes in every industry and sector at a pace that we have never seen before. Some of those changes have led to large-scale, fundamental shifts in the business landscape, and some of them have led to smaller, more nuanced opportunities for new and existing businesses. All of those changes, however, have the potential to affect people in more than just the positive ways we have in mind when we code.
From the Luddite Rebellion of 1811 to the Lamplighters’ Union fight against the electric arc lamp in the 1890s, worries about automation displacing human jobs have existed for centuries. Those fears have been offset by the reality that change typically takes place slowly. Robots, for a more modern example, didn’t take all the manufacturing jobs overnight. Instead, robotics has gradually reduced the need for “hands on” humans in the factory over the past several decades. The jobs lost weren’t effortlessly absorbed into the economy, but the shift happened slowly enough that they could ultimately be absorbed.
Today, the fear that automation will displace jobs faster than they can be absorbed is far more plausible. Technology is progressing at breakneck speed no matter where you turn, and no industry seems insulated from waves of innovation that use automation to do things more efficiently and effectively. What was once just a concern for manufacturing workers is now a concern for everyone whose work has any analytical or repetitive features. Want to build a car with no factory workers? Look no further than the Tesla plant. If you need an appendectomy, on the other hand, you’ll still need a surgeon’s dexterity for at least the near future. It’s not far-fetched, however, to imagine a future when an attendant might oversee an automated appendectomy like a Starbucks barista making digital selections on a Mastrena espresso machine.
Factories or hospitals, the work we’re doing in tech carries incredible weight that we may often take too lightly. We are actively finding ways to increase efficiency in every field, and I laud that. But as our enterprise-level efficiencies move up the hockey stick, we need to start thinking about jobs the same way we balance the environmental impacts of our work. The impacts of our work go well beyond the innovations we create. If it hasn’t been asked before, it’s time to ask now: what ethical responsibilities do we have as we use code to transform the world?
Concern over the ethics of code opens the door to larger conversation about how Artificial Intelligence, along with the changing ways we work, is incubating a new economic model in the West. It’s a model that requires different competencies and job types, but it also has the potential to empower humans like never before in our history.
The Implications of AI
Visions of AI have tantalized, inspired, and terrified us for years. From HAL 9000 to Ex Machina, we portray AI as a conscious superintelligence or supervillain. The reality is much more benign in the Hollywood sense and more insidious in its potential impact on our economy. The AI that’s real today is known as “Narrow AI” or “Applied AI,” and it does very specific work: managing your calendar, finding a song that’s similar to others you like, giving you directions that route you around traffic, beating you at chess. It’s what many of us are working on every day, and, despite our fears of superintelligence, Narrow AI is what is actually changing everything.
Dr. Rand Hindi, founder of Snips.ai, broke this down in detail in an article with a title that I love: “How My Research in AI Put My Dad Out of a Job.” Beyond the ethical jam, his point was that we shouldn’t worry about super-intelligence despite all the big names in tech who have come out with dire warnings. The reality is that super-intelligence could be a distant dream, and as Dr. Hindi puts it, we’re “missing the point that in the next decade, Narrow AI will already have destroyed our society if we don’t handle it correctly.” Though the warning is a bit hyperbolic, it’s true that when we focus on super-intelligence (also known as Strong AI or Artificial General Intelligence) we forget that Narrow AI’s inherently limited scope means that coders are working on discrete uses in every imaginable way. Narrow AI will replace or transform any job where information gathering and pattern recognition drive a volume business. That’s not just laborers. That’s accountants, traders, realtors, lawyers, software developers, and on and on. The jobs can be low pay or high pay, but either way, AI can do them faster.
We’re already beginning to see how AI will become invaluable in these fields. For instance, one Canadian firm – Blue J Legal – is using AI to help accountants and tax lawyers predict how courts are likely to rule on a given set of facts and client circumstances years into the future. A Palo Alto-based legal startup, Casetext, is enabling lawyers to upload briefs and have AI do the case research work of hundreds of paralegals. In Japan, Fukoku Mutual, an insurance firm, is replacing 34 claims adjusters with AI built on IBM’s Watson. In the US, we are particularly susceptible to Narrow AI transforming industries: PwC found earlier this year that 38% of all US jobs are at high risk of automation in the next 15 years. That’s just one of a number of studies that have reached the same conclusion: the next two decades will be a wild ride for our economy if we don’t make planful changes soon.
Immunity to AI
That’s certainly not to say that every kind of job in the US is at risk. There is such a thing as “immunity to AI,” at least for the next couple of decades. The simplest way to identify jobs that are insulated is to ask, “Does it require emotional intelligence or ‘non-patterned’ decision making?” Ultimately, that leads to three broad categories of jobs.
The first category is jobs that require meaningful creative interactions with other people. Narrow AI can advise on the most successful closing strategies for a particular case, but it’s not capable of making a compelling closing argument in court. Even if we use an AI system to develop an argument based on the court’s preferences, to identify and incorporate all of the relevant case law, and to select words and phrases that most people find persuasive, Narrow AI lacks a clear path to replacing the human ability to deliver an argument to humans or to adapt mid-stride in reaction to others.
The same can be said for any number of professions. Marketing strategy and design will need human creativity and emotion. HR will need people to listen, empathize and make the right, context-based decisions. Nurses will need to bring humanity to patient interactions and treatment. Teachers will need to bring expertise and learner-specific strategies to education. Even customer service will need humans in place to receive escalations that go beyond an AI’s ability to address.
The second category is jobs that won’t be replaced (yet) due to limitations of robotics. Our ability to code has progressed far faster than our ability to build machines capable of fine motor skills or of dealing with unpredictable physical challenges. Repetitive physical tasks are one thing, but as a report from McKinsey & Company pointed out last year, even maid service in hotels goes beyond the capabilities of autonomous machines. For example, everyone throws towels and pillows in different places, and automated robots simply can’t deal with that degree of variation in a cost-effective way. And though we are aggressively developing more advanced robots, it’s expensive and time-consuming to build them, meaning fields like on-site construction will remain largely secure for the foreseeable future even as the tools of the job change. None of that is to say that AI will not affect these first two categories of jobs. In fact, the most likely scenario is that many of these jobs will transform to work side by side with Narrow AI tools sooner rather than later.
The third category of AI-insulated jobs is entrepreneurs. Be it a startup founder or a food truck operator who works alone, entrepreneurial roles require aspects of the first and second categories to various degrees. Small-business entrepreneurs and solopreneurs wear many hats on any given day—be it CEO, CMO, CFO, CIO, etc. That diversity of work makes entrepreneurial work very difficult to automate.
Ethics of Code
So on one hand we have jobs that are “safe from AI,” while on the other we have jobs that are likely to be displaced. Where does that leave us as coders and technologists? If you listen to Grady’s TED Talk on superintelligence, you’ll hear him say, “The rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes?”
I don’t believe we should ignore the “I” in that question. The ethical dilemma we face in technology is one of our own creation, and that, to me, means it’s incumbent on the tech community to deliver the solution as well. Said simply, if you’re aware that the work you’re doing is going to displace jobs, you should be intentional in your effort to leverage technology to create new opportunities for the displaced.
Snips.ai’s Dr. Rand Hindi proposes an interesting idea for social and governmental programs to support an economic framework that would make widespread, AI-driven transformation sustainable. His argument is that the end result of jobs displaced or altered by AI is a population that must be better educated to manage or interface with AI. That means we need to incentivize people to pursue ongoing, skills-based education in technology.
Dr. Hindi proposes Universal Educational Income, a system in which people would receive a monthly salary as long as they are enrolled in some kind of educational program. Any number of challenges come to mind whenever a universal income is proposed, from who funds spending at that scale to whether it can ever be enough to make a difference. It’s not an obviously viable policy, but I can certainly appreciate the beauty of the idea: create a system that engineers people into the AI equation. By incentivizing people to constantly learn, you have a workforce that’s better prepared for a new economy. It’s a fascinating possible solution, and I believe the spirit of engineering our culture into an AI-fueled economy is the right one. That said, I believe there are better ways to make that happen.
Engineering People In
First, I believe a simple premise holds: the faster we advance AI, the more we will drive demand for humans to manage and direct what AI makes possible. The reality is that we are heading toward a huge supply of Narrow AI in the economy. Look at marketing, for example, a field seeing a huge amount of investment in predictive AI technologies. Even as AI becomes acutely capable of optimizing ad spend and placement, roles like Marketing Director and Creative Director actually grow in importance. The repetitive work is displaced, but demand for creative thinking is actually on the rise. In other words, there has never been a better time to have the entrepreneurial spirit, because technology and market forces are in place to support you.
Steve Case, CEO of Revolution LLC, gave a perfect example of this in a recent LinkedIn post. Two hundred years ago farming represented 90% of the American workforce. Now, that number is less than 2%. Rather than purely displacing jobs, technology made farmers more efficient and productive, and new jobs were created by the need to supply and support modern agriculture. In a modern context, it’s easy to envision new entrepreneurial roles that wouldn’t be possible without AI—ones made possible by bundling creativity and dexterity with deep analytical insights.
What jobs will best augment or enhance what AI can do? How can the tech industry be as instrumental in creating jobs as we are in displacing them? These are questions that everyone driving tech automation should be thinking about. At GoDaddy, I’m pushing to build a platform that empowers entrepreneurs to make their ideas real, with machine learning tools and predictive analytics to guide their decision making. I think that’s one important way to help make our economy immune to AI, but I’d like to challenge the industry to think of a hundred more solutions—and then get to work testing them.
For entrepreneurial options, our goal should be to deploy Narrow AI in a way that encourages more and more people to experiment with self-driven ventures. If we engineer tools that reduce barriers to access through elegantly simple systems and widespread availability, then the technology we build for efficiency can empower economic participants at the same time. There’s no doubt that we can be the drivers of a new economy with new companies and new careers – but we have to be intentional about that role.
Finally, I think the tech industry needs to be a louder voice on the real risks to our economy that Narrow AI is creating right now. Grady Booch and other luminaries shouldn’t be left to carry the entire load. More of us need to clearly articulate both the promise of AI that people should be excited about and its real economic dangers. We aren’t building Skynet, but we might be building something just as dangerous for billions of people if we don’t purposefully create new opportunities as the old economy passes.
Where I Land
Larry Niven once said, “That’s the thing about people who think they hate computers. What they really hate is lousy programmers.” That’s a timely and true quip in its own right, but it should also remind us that we are the ones behind the code. We have an ethical opportunity to consider, and attempt to address, what will happen because of our code.
As we create new applications for AI that make it possible for seemingly once magical automation to happen, we should devote some of our time and energy to figuring out how to make more people magicians. Let’s help more people become builders of the new economy by putting the power of what we build in their hands as quickly and simply as possible. That’s how we’ll begin to see the new jobs and businesses emerge that will drive a new economy forward. No matter what, we need to bring our own humanity to bear every time we type a line of code. If we can do that, there will certainly be no reason to fear Skynet – but there will also be a lot to be excited about thanks to the future of AI.
Your Voice Wanted
One of the best ways for me to mature my thoughts on the ethics of code is to hear from you. Please share your thoughts below—I tend to be on my blog in the evenings, so look for my responses then.
The company Bayer is famous for inventing aspirin in 1898, arguably one of the world’s most beloved brands, and for good reason. But I was surprised to learn that just two weeks earlier, the same three guys who gave the world aspirin also created Bayer’s other big brand, heroin, which was marketed for about eight years as the world’s best cough medicine.
From Andrew Essex on his book about the End of Advertising. Hat tip: John Maeda.
I found this funny anecdote from a CNET article about the future of power:
Power and utility companies must exactly balance supply with what people consume at any given moment. UK grid operators famously must cope with a demand surge after the TV soap opera “EastEnders” ends, when thousands of people start boiling water for tea.
Last week we released version 4.8 “Evans” of WordPress; as I write this, it has had about 4.8 million downloads already. The release was stable and has been well received, and we were able to do the merge and beta a bit faster than we have before.
When I originally wrote about the three focuses for the year (and in the State of the Word), I said releases would be driven by improvements in those three areas. People are particularly anticipating the new Gutenberg editor, so I wanted to talk a bit about what’s changed, what I’ve learned in the past few months that caused us to course-correct and do an intermediate 4.8 release, and why there will likely be a 4.9 before Gutenberg comes in.
Right now the vast majority of effort is going into the new editing experience, and the progress has been great. But because we’re going to use the new editor as the basis for our new customization experience, the leads for the customization focus have to wait for Gutenberg to get a bit further along before we can build on that foundation. Mel and Weston took this as an opportunity to think not just about the “Customizer,” which is a screen and code base within WP, but really to think in a user-centric way about what it means to customize a site, and they identified a number of pieces of low-hanging fruit, areas like widgets where we could have a big user impact with relatively little effort.
WordPress is littered with little inconsistencies and gaps in the user experience that aren’t hard to fix, but are hard to notice the 500th time you’re looking at a screen.
I didn’t think we’d be able to sustain the effort on the editor and still do a meaningful user release in the meantime, but we did, and I think we can do it again.
4.8 also brought in a number of developer and accessibility improvements, including dropping support for old IE versions. As I mentioned (too harshly) in my first-quarter check-in, there hasn’t been as much happening on the REST API side of things, but after talking to some folks at WordCamp EU and the community summit before it, I’m optimistic about that improving. Something else I didn’t anticipate was wp-cli coming under the wing of WP.org as an official project, which is huge for developers and people building on WP. (It’s worth mentioning that wp-cli and the REST API work great together.)
To summarize: the main focus on the editor is going great, customization has been shipping improvements to users, wp-cli has effectively become a third focus, and I’m optimistic about REST-based development for the remainder of the year.
I’ll be on stage at WordCamp Europe in Paris tomorrow afternoon doing a Q&A with Om Malik and taking audience questions; I’ll also have a few announcements. You can get to the livestream tomorrow on the WordCamp EU homepage.
Christopher Mims writes for the Wall Street Journal on Why Remote Work Can’t Be Stopped, riffing in part off the IBM shift I wrote about a few weeks ago. I was excited to see an Automattician, Julia, featured at the top, and a few other colleagues having their voices heard in the article.