On many client projects, there is a need to create custom post types. I usually create them using the wp scaffold post-type command from the WP-CLI. This will create the necessary PHP functions with the most common definitions and a large array where you can set nice labels. But in the most basic way, registering a post type could look like this:
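function myprefix_register_book_post_type() {
	// A minimal sketch: "book", the function name and the label are just example values.
	register_post_type(
		'book',
		array(
			'public' => true,
			'label'  => 'Books',
		)
	);
}
add_action( 'init', 'myprefix_register_book_post_type' );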
Around two weeks ago, the Global Leads, the Local Lead and the Mentor of WordCamp Europe 2021 took the hard decision to postpone the in-person event in Porto for another year and to organize the next edition of WCEU online again. After the great success of this year’s event, the new organizing team will have the chance to organize an even bigger event in 2021, giving them more time to focus on the special needs of an online event. The Call for Organizers ended just around a week ago. I’m wishing the new organizing team – which I will not be a part of – all the best.
Last week I started a new project and developed a theme using Underscores as the base theme. I always use the “sassified” version and create it with the WP-CLI scaffold command. When I inspected the files, I could see some new files in the directory. But one file that has been included for quite some time is the package.json file, defining various npm tasks, some of which compile the Sass files or watch them for changes.
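The relevant part of such a package.json could look roughly like this – a sketch only, as the actual scripts in Underscores are more elaborate and the tooling may change over time:

"scripts": {
	"compile:css": "node-sass sass/ -o ./",
	"watch": "node-sass sass/ -o ./ -w"
}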
Using file watchers from PhpStorm
In the past, I have usually used the file watchers from PhpStorm. In a blog post from January 2018, I explained how to set up such a file watcher to compile Sass files. This is a good approach when there is no other mechanism to compile those files bundled with the software/theme/plugin. But if you develop something that has a compilation task defined in the project, things can get more complicated, especially when you work in a team whose members use different operating systems and default Sass compilers. In the past, a Ruby compiler was the recommended option, but as of March 2019, this option has been deprecated. Now it’s recommended to use either Dart Sass or LibSass to compile your files. I tried out Dart Sass, as it was the only compiler that produced the exact same CSS files on Windows, Linux and Mac in my tests.
So why not just use one of the new compilers in combination with a PhpStorm file watcher? There are two reasons. First, those file watchers are not easy to set up, especially when the folder structure differs from the default structure. This is the case for Underscores, where the style.scss file is stored in the sass subfolder, but the compiled style.css file is in the root folder. The second reason is the possibility of additional compilation tasks defined in the package.json file. A common example for that is a PostCSS plugin like the Autoprefixer. Those additional steps would not run with the Sass file watcher in PhpStorm.
Running a watcher from the package.json file
You can run an npm (or similar) task in PhpStorm in two ways. You either start a terminal and run the command there (which will exit as soon as you close the terminal), or you run it using “Task | Run Gulp/Grunt/npm Task” from the menu bar.
From here you simply double-click one of the tasks and it will run in a new task window in PhpStorm. But every time you develop on that project, you have to remember to start that task manually, whereas the file watcher would start automatically. So isn’t there a better way?
Using a startup task to run the watcher
Fortunately, there is a very easy way, which I found some weeks ago. You can define startup tasks that run every time you open a project in PhpStorm. In the settings at “Tools | Startup Tasks” you can choose from a wide variety of tasks to run. One of them is npm. So first, you just add such a startup task:
If you have run the npm task manually before, as described above, you can simply select it on the left. If you haven’t, you simply create a new task, giving it a name and selecting the package.json file as well as the command and script to run on startup:
Once you’ve set the startup task, simply restart PhpStorm or close and reopen the project to test the startup task. You should see the same run window opening as in the manual run.
Conclusion
The file watchers in PhpStorm are really useful when you work on a project that is not using any task runner. But they are limited and the setup can be more complicated. Using startup tasks makes it a lot easier to use the tasks bundled in a project and ensures that you and your team are all using the same tasks. This can dramatically reduce issues like merge conflicts on differently compiled files.
Even the largest hard drive has a limit. From time to time, you have to clean up some files to be able to work on new projects. Working on multiple WordPress projects, old and new ones, can use up a lot of storage. In a previous blog post from earlier this year, I showed you how to save storage by loading images from a live website. As images and other files in the uploads folder are usually the largest parts of a WordPress website, this already helps a lot. But there is another type of file that can easily fill up your hard drive in no time: database dumps.
Finding large database dumps
When you work on projects that have already gone live, you probably want to get the latest database from the live website and import it locally. But you might also have changed some settings in your local environment, so you make a backup of the local database before replacing it with the live one. That’s already two dumps. Then you play around a bit with the content and, just in case, make some more backups. And as some WordPress databases easily reach a few hundred megabytes, your free disk space shrinks quite fast. So the first step you should take is to find all those large database dumps on your local disk. For that, you can simply run this command in the main local development folder:
find . -type f -size +10M -name "*.sql"
This command will find any SQL dump that is larger than 10 MB. You might even want to search for smaller files, if you have lots of them that add up to a few hundred megabytes in total. If you want to see how large all of these files are, you simply extend the command and run it through the du command:
find . -type f -size +10M -name "*.sql" -exec du -h "{}" \;
Now that you have found all of these files, you can delete those you really don’t need anymore. But what about files you still might need?
Compressing all database dumps
Well, you can simply use another command and compress all of them in one go. You just have to run this command:
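find . -type f -size +10M -name "*.sql" -exec gzip "{}" \;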
This will compress all database dumps larger than 10 MB using the gzip command. I ran that command on my work laptop some days ago and was able to save more than 3.5 GB on my drive. It only took about 90 seconds to run. If I need to restore any of the compressed dumps, I can gunzip them first and then import the dump into the database.
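As a sketch, restoring one of them could look like this, assuming a dump named backup.sql.gz:

gunzip backup.sql.gz
wp db import backup.sql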
Find other large files
Once you’re done compressing all database dumps, why not search for other types of large files that can easily be compressed or deleted? Some other good candidates are logfiles, like the debug.log you will find in the wp-content folder when you’ve activated WP_DEBUG_LOG in an installation. To find those files, either replace the search pattern or just remove it completely to find all large files:
find . -type f -size +10M -exec du -h "{}" \;
Searching without a file extension will probably also show you a lot of media files, which can be quite large as well. But for those you might use the trick mentioned earlier and load them from a remote server.
Another type of file you may find are XML files. They may come from the WordPress XML Exporter, but they could also be some other type of file that needs to be left uncompressed.
Caveat
Some of you may think now “Why not just compress all .sql files?”, but this can cause some issues. If you search without the size filter, you will find some files within plugins. Those files are usually used to create the necessary database tables when installing/activating the plugin. If you compress them, the plugin can probably no longer use them. So don’t use too low a file size in the filter. On my system, even 1 MB was small enough to catch all real database dumps but not those special files. The same is true for some XML files mentioned earlier. And you can’t compress logfiles if you still want to write to them. So don’t just compress every text file that is too large.
Conclusion
Having a lot of projects needs a lot of disk space. Knowing how to save some of this valuable space is crucial, especially when you can’t simply upgrade the hard drive in your hardware. Compressing all those database dumps cleaned up a lot of disk space for me in no time, without the need to decide for every file whether I still need it or not.
I’ve probably mentioned in earlier blog posts that I usually use PhpStorm when developing websites. Some weeks ago, every time I tried to open a Markdown file, I got the following error:
As I’m quite familiar with writing Markdown in normal text editors and don’t need syntax highlighting or the preview, I simply disabled the Markdown plugin.
Today I was working on a WordPress plugin and wanted to separate some text into multiple lines. To do this, you have to add two spaces to the end of every line. But as soon as I wanted to commit the file, the PhpStorm file cleanup routines removed those “unnecessary whitespaces” at the end of the lines. So I reactivated the Markdown plugin, just to find out that the issue still existed.
Solving the issue
A quick search for the error message brought me to a support ticket, where I found the hint that it might have to do with the JRE PhpStorm uses to run. If you have multiple JRE versions installed, you can use a plugin to choose between them. JetBrains, the company behind PhpStorm, recommends using a special JRE version provided by JetBrains.
I currently run Manjaro Linux on my work laptop, so I searched the package repositories for that special version and found the following:
After installing the patched JRE version and restarting PhpStorm, I was able to open Markdown files again. When trying to commit the Markdown file, the additional two spaces at the end of the lines were not deleted anymore.
Conclusion
Sometimes a quick fix like disabling a plugin for a software seems to solve an issue. But there might be some side effects with that. Trying to dig deeper into the real issue and finding a solution might sometimes take more time, but it’s usually the better approach.
Yesterday, the 8th edition of WordCamp Europe had its final day. And it was an amazing experience. I had the pleasure to take part as an organizer, but the result was quite different from what I would have expected.
WordCamp Europe Porto 2020
The journey began 12 months ago in Berlin. At the closing remarks of WCEU 2019 we announced that we would go to Porto. I committed myself to continuing for a fourth year in a row as an organizer, taking the role of a Global Lead. But I was not going to do that alone. I was joined by Tess and Jonas. For the first time in WCEU history, we started with three Global Leads. But the team would not be complete without José Freitas, the Local Team Lead for Porto.
In September, our Call for Organizers was done, our organizing team of 72 volunteers from the WordPress community was selected and we started organizing an event for around 3500 attendees. This trip to Porto would have also been my first trip to Portugal. But then things changed.
The cancellation of WordCamp Asia
The COVID-19 outbreak had just started when, on 12 February, only around a week before the event, WordCamp Asia was postponed to 2021. This was not only a shock for the global WordPress community, but also for us as organizers of WordCamp Europe. With our event only three and a half months later, we could not tell whether we would be able to let it happen. The next weeks were quite intense. Other events were postponed as well, and it was getting clear that we had to do something. Finally, on 12 March, one month after WordCamp Asia, we also had to postpone WordCamp Europe in Porto to 2021. It was one of the hardest decisions any organizer of a WordCamp Europe probably ever had to make. But the WordPress community was amazing in supporting our decision.
Starting from scratch
When we decided to postpone the in-person event, we also decided to have an online event instead. Up to that day, no other WordCamp that had been canceled had gone online. So there was no experience in the community on how to best do an event of our size online. We basically had to stop with the local event and start all over again, organizing a very different event.
Although the decision was made by the whole organizing team, not all of us were able to continue helping with the online event, for various reasons. While Tess and Jonas couldn’t continue as Global Leads, I was joined by Rocío Valdivia, who was previously our mentor. We were joined by another 31 organizers who re-grouped into different teams.
This was the start of the first ever WordCamp Europe Online and a new milestone for the community and many of us.
Three months of passion and hard work
Organizing a WordCamp Europe in nine months is a lot of work. Organizing an online event of that size in less than three months is incredibly harder. With passion, dedication, hard work and amazing teamwork, we managed to deliver an event that can hopefully be an inspiration for other online WordCamps to come.
We received so much help from other WordCamp organizers along the way. The WordCamp Asia team showed us how to deal with postponing an event, and the WordCamp Spain team organized a real community event that made us change some of our plans to copy some of the great ideas they came up with. Other WordCamps around the world also had some unique things we were inspired by.
The biggest Wor(l)dCamp ever
Any organizer who has ever had contact with vendors has probably experienced the name being misspelled as “WorldCamp”. And this time, it couldn’t be more true. Although our focus is the European community, we wanted to allow anyone from around the world to participate. So we chose a time from 15:00 to 20:00 CEST on all three days, allowing Europeans to join in their afternoon, so they don’t necessarily have to take a day off to participate. But these times also allowed many attendees on both sides of the Pacific to attend, either in the early morning or late evening.
The WordCamp in Porto was expecting 3500 tickets to be sold and more than 3000 attendees. Last year we sold more than 3300 tickets and welcomed around 2700 attendees from 90+ different countries. This year we sold 8600 tickets for our live stream and had more than 9000 views of the Friday Track 1 sessions in the first 24 hours, with some attendees watching the replay of the stream. But the number that was most impressive: we had signups from 140 different countries, so truly a “WorldCamp”:
An experience I would never have imagined
I’ve attended every WordCamp Europe and have been involved in the organizing team since 2017. When we had the event in Berlin last year, it felt quite different. This year it was even more different. Usually I traveled to a foreign country some days before the event and met old and new friends. This year, we all had to attend and organize from our homes. Everyone was missing the personal contact with other attendees. But the feedback from the community was overwhelming!
A huge thanks to the community!
I cannot put into words how I’m feeling right now. The experiences of this week are still so fresh that they are hard to describe. But more than anything else, I’m feeling thankful. For my fellow co-organizers, for the community who attended the talks and participated at the Contributor Day, for the speakers, the emcees and other volunteers, for the sponsors helping the event financially and for any other person or organization helping with this event. Even though we didn’t have the chance to meet with friends, we gave so many more people the chance to participate.
See you all in Porto in 2021!
I hope many of you joined us this weekend at WCEU 2020 Online. Next year, we will hopefully have the opportunity to finally meet in Porto. I will again take part in the organizing team as one of four Global Leads, joined by Lesley, Taeke and Moncho. If you want to help as an organizer yourself, please apply at our Call for Organizers. And if you just want to attend, you can already grab your ticket.
I always develop sites locally. My local environment is using Docker, which I will probably cover in an upcoming blog post series. Sometimes I’m experiencing really long request times, especially in the WordPress backend. As I also usually use the Query Monitor plugin while developing, I quickly found out that some HTTP requests were failing. Those were triggered either by core or by some premium plugins/themes, usually to check for updates.
Blocking external requests
In order to get rid of those requests, I first deactivated them by removing the filters or actions that triggered them. Then I found an easier way, using a global filter.
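A minimal sketch of such a filter, using the pre_http_request hook to short-circuit every request – returning a WP_Error here is just one option, not necessarily the exact code I used back then:

add_filter(
	'pre_http_request',
	function () {
		// Returning anything other than false short-circuits the request.
		return new WP_Error( 'http_request_blocked', 'HTTP requests are blocked in this environment.' );
	}
);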
But this was not the easiest or best way, as this will block all requests, no matter which host they are using. To block only external requests, we can simply set a constant:
define( 'WP_HTTP_BLOCK_EXTERNAL', true );
This will not block any local request, which might be needed for cron jobs or other functionality.
Allowing some external hosts
Sometimes you cannot simply block all external requests, as some plugins/themes you develop use external APIs. For those hosts, you can set another constant with a comma-separated list of those hosts:
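define( 'WP_ACCESSIBLE_HOSTS', 'api.wordpress.org,downloads.wordpress.org' );

The hosts here are just examples; a wildcard for subdomains like *.wordpress.org is also possible.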
These two constants can really help to block any external HTTP requests triggered by WordPress. This only works, though, when the WordPress API functions for HTTP requests are used. If a plugin/theme is using file_get_contents, curl or similar functions directly, it does not block those requests.
It will also not block any request made by the browser. If you want to block those requests as well, you can try out the plugin “Offline Mode” written by Frank Bültge, which you can download and install from GitHub. It will try to block as many external requests as possible.
Conclusion
Working locally is always the best thing to do when working on a project. But if you don’t have a stable internet connection (or none, like on a plane), it can slow down your local development environment dramatically. Using these constants can bring back the speed you are used to.
I love reusable blocks! OK, right now everyone is talking about “(Block) Patterns” and they will be amazing. But many still don’t know about reusable blocks or just don’t know why they are so useful.
What is a reusable block
Well, as the name indicates, it’s a block that can be reused. Anywhere. And as such a block can also be a group block, a reusable block can make a whole set of blocks reusable. Once you have created/converted one, you can use the same “content” on multiple posts, pages or other post types. In the future, also in widgets and any other “block enabled areas”.
But how do you find a reusable block? The easiest way is by clicking on any “Add Block” button (the square buttons with the plus sign) and then searching for the name or scrolling all the way down:
You can also use the “Search for a block” field with the name of the reusable block (in our example “Contact”) or use the “forward slash search” to quickly find it. New to this view is also the “Manage all reusable blocks” link at the end, which could previously be found in the “More tools & options” view:
When you click on one of the links, you will be redirected to the “Blocks” overview.
Managing the blocks
The list of blocks looks very much like the listing of any other post type. Well, that’s not really a big surprise, as reusable blocks are saved as a post type wp_block in the database.
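You can see this, for example, with WP-CLI, which will list all reusable blocks just like any other posts:

wp post list --post_type=wp_block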
From the management list you can choose “Edit” on an existing block, which will open up the block editor where you can change the block and save it. You can also click on “Add new”, which will allow you to create a new reusable block without the need to create a regular one in a post or page first and then convert it into a reusable block.
But maybe the two most handy things you can do from here are “Export as JSON” for individual blocks and “Import from JSON”. With these tools, you can simply migrate reusable blocks from one WordPress installation to another.
Bonus
If you pay attention to the last screenshot, you might have noticed the “Blocks” link in the “Tools” section. This would usually not be there. If you navigate to the listing using one of the links, you will end up being “nowhere”: this listing is hidden and you can’t reach it using the left navigation. Before the two links were put into those places, it was even harder to find that listing. That’s why I wrote a small helper plugin adding the “Blocks” link shown in the last screenshot to the “Tools” section. This made it a lot easier for the client to find that list. If you want to add this link yourself, simply download the plugin from the Gist and install it.
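The core of the plugin is just a few lines. A sketch of the idea could look like this – the capability and labels are my choices here, and the plugin from the Gist may differ in details:

add_action(
	'admin_menu',
	function () {
		// Link the hidden wp_block listing from the "Tools" menu.
		add_submenu_page(
			'tools.php',
			'Blocks',
			'Blocks',
			'edit_posts',
			'edit.php?post_type=wp_block'
		);
	}
);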
Conclusion
Even if reusable blocks might not be used as much once we are able to use the new “Patterns”, there are still use cases where we might want to have “synchronized content” on our sites. Knowing how to get the most out of those blocks and where to find them makes working with them a lot easier.
Working on a website, whether it has already gone live or is a new project, you should always have a copy of the website in a staging environment. But you probably don’t want that website to be indexed by search engines. In this blog post, I will explain some ways to prevent the indexing.
Method 1: Use the default setting
In the dashboard of your website, navigate to “Settings | Reading”. Here you will find the setting “Search Engine Visibility” with a checkbox to “Discourage search engines from indexing this site”. This will dynamically create a robots.txt file to tell search engines not to index your site. But as the note says, “It is up to search engines to honor this request”, so some search engines might still index the page.
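With the checkbox activated, the dynamically generated robots.txt should look something like this:

User-agent: *
Disallow: /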
Method 2: Prevent indexing in the server config
The first method usually works pretty well. There is only one big issue. The setting is stored in the database and can easily be overwritten. How? Well, you might want to import the database from the live website at some point, to see the latest content on your staging. Do you always remember to go back to that setting and enable it again on your staging? You might forget it.
Therefore, preventing the indexing in the server configuration is a safer way, as it persists even when importing a database from live. Simply add the following lines to the configuration:
Header set X-Robots-Tag "noindex"
Header merge X-Robots-Tag "noarchive"
You can do that in the .htaccess of the staging website. This is most useful when you host it on the same server or when you cannot change the global configuration. But you have to be careful not to overwrite the .htaccess file with the one from the live server.
If you have a dedicated staging server and you don’t want any of the websites hosted there to be indexed, just add the lines to the global configuration, like the /etc/apache2/apache2.conf or a similar file.
Method 3: Use a maintenance plugin
You could also protect your website from search engines and from other users by using a maintenance plugin. This will usually add a password protection to your site, or you have to log in in order to see the website’s content. This can also be useful to give some people access (like the client), but not everyone. This method has the same issue though: once you import the live database, you have to make sure to reactivate the plugin, as the state of activated plugins is also stored in the database.
Method 4: Use “basic access authentication”
With the “basic access authentication”, a.k.a. .htaccess protection, you can prevent access to the website without the need of a plugin. With an Apache webserver, you can add a few lines to your .htaccess file and any visitor to your page has to enter a username and password. This method is safe against imported live databases, but when you overwrite the .htaccess with the one from the live website, you have to restore the protection again. Therefore, you can also store it in your global (per site) server configuration.
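Those few lines could look like this – the realm name and the path to the .htpasswd file are just examples:

AuthType Basic
AuthName "Staging"
AuthUserFile /path/to/.htpasswd
Require valid-user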
Conclusion
There are many different ways to protect your staging website from being indexed. Whatever method you use, always make sure that it is still working after you do an import from live. You can remove websites from search engines, but you have to do it per domain and for every single search engine, using their individual tools.
In the last blog post, I explained how I move a WordPress website to another server. In step 7, I told you that you have to replace the domain in the database. But this is not the only replacement you might have to do. Today’s blog post will give you some more examples of potential replacements and some other things you have to consider.
Let’s have a look at the most basic replacement again. This command will replace the domain in all tables of the database with the prefix of the current WordPress installation:
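wp search-replace "https://staging.example.com" "https://example.com" --all-tables-with-prefix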
It will handle replacements in simple strings, as well as in serialized PHP objects. You should also replace the path to the current installation, if it differs between the servers – here with example paths:
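wp search-replace "/var/www/staging.example.com" "/var/www/example.com" --all-tables-with-prefix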
After you have run the previous commands, the old domain should be replaced in all tables. You can simply check that with another WP-CLI command:
wp db search "https://staging.example.com"
Additional replacements
Unfortunately, many plugins save complex data in other formats. Therefore it’s also advisable to search for the following patterns and replace them, as long as the previous search still returns some values.
Replacements for domain changes (search pattern → replacement, with a comment):

- http://example.com → https://example.com (new domain without SSL)
- http://staging.example.com → https://example.com (old domain without SSL)
- //staging.example.com → //example.com (protocol-relative paths)
- https:\/\/staging.example.com → https:\/\/example.com (JSON objects; run this for all previous patterns)
- @staging.example.com → @example.com (email domains; optional, as sometimes not wanted)
- staging.example.com → example.com (dangerous!)
There are some variants to consider. For the first four replacements, you should also “escape” all slashes, as objects might have been stored in a JSON string. Replacing email addresses is optional, as your new system might not use its own emails (like when migrating from live to staging) or you might not want to show them in the frontend with the different domain.
The last replacement seems to be the easiest one, as it would do all the previous replacements in one go. But it could potentially break things, as the domain name might be part of an image name in the media library, which would result in a broken path to that image. So it’s best not to just replace the bare domain name.
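As a sketch, one of the escaped variants could be run like this – wp search-replace treats the search as a plain string by default, so the backslashes are matched literally:

wp search-replace 'https:\/\/staging.example.com' 'https:\/\/example.com' --all-tables-with-prefix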
Additional steps
Themes, plugins or the server could use caches. There are different methods to clean those caches.
Fusion Builder (e.g. Avada Theme)
At “Avada/Themename | Theme Options | Performance | Reset Fusion Caches”, just use the button “Reset Fusion Caches” to remove all files from the cache.
Autoptimize
In the adminbar, you will find an entry “Autoptimize” and at the end of that entry a link named “Delete Cache”.
PageSpeed
To flush the PageSpeed cache, you have to connect to your server and run the following command:
touch /var/cache/pagespeed/cache.flush
It might take some seconds for the cache to be fully flushed. After it’s empty, remove the file:
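rm /var/cache/pagespeed/cache.flush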