The Evolving Newsroom is a series of Q&As with important names in the data journalism field, discussing how the newsroom is evolving to better incorporate data and data-driven journalism. Next, I’ve talked with Aron Pilhofer, Editor of Interactive News at the New York Times.

Ændrew Rininsland: Could you describe a typical workday?

Aron Pilhofer: For me? Lots and lots of meetings. Not so true for the whole group, but for me — lots and lots of meetings.

Q: So you have more of a managerial role?

A: Right.

Q: How then would you describe a work day for a typical journalist on your team?

A: Well, it depends what they’re working on. They may or may not be part of those meetings, they might be writing a heck of a lot of code — they’re obviously writing a heck of a lot more code than I am these days. But I’m not sure there’s such a thing as a typical day — our projects tend to be very, very different. If we have a project that’s a quick-turnaround kind of thing, chances are it will be a different kind of experience than if it were a long-turnaround thing, like elections, where it’s in a constant state of release, revise, try something, tweak it, release it again, see if it works — stuff like that. Conceiving of different elements of this through the arc of a story or event.

Q: Would you say the work your team does is an extension of Computer Assisted Reporting (CAR)?

A: I do think that it is; I know that reasonable people can disagree about this, but I do think that these roles grow very naturally out of the tradition of the CAR movement from the ‘80s and ‘90s. It goes back before then, but it really wasn’t until the ubiquitous personal computer had arrived that it was really cost-effective to have people who could specialize in that.

Q: In what ways has the newsroom adapted to these new methodologies in reporting?

A: Well, I don’t know that I would say that — I very much think that’s a work in progress. I do think we do a lot more web-first type projects than we ever had in the past. I think that some of the projects we’re able to do or conceive of are influenced heavily by the kinds of things we’re now capable of doing, that we weren’t in the past. In the past, we were pretty much shackled to whatever our content management system provided and whatever templates our designers and IT department were able to cram into that. I think now we’ve been able to break free from those templates and do everything from a custom-built interactive for a single event or breaking news to a full-blown website like School Book, which is the most ambitious project we’ve done editorially. The latter was a really cool process; it involved a core group of reporters, editors and my folks, all together around a table, working on a single project. It was really quite interesting to see. I don’t think the ambition to build a sub-branded experimental site like that would have or could have happened if a team like this had not existed.

Q: Why do you say that?

A: I frequently am asked “How technical do journalists need to get? Do they need to code, et cetera?” I don’t think they need to, but I think they need to have an understanding of what’s possible. The problem is, it’s much easier to say than to actually do. To actually understand what’s possible, you have to have at least a basic understanding of how all this stuff works — what are good devices for telling stories, what are bad devices for telling stories? What makes sense? Building a sub-branded education site the way we did wouldn’t have been possible before. It wouldn’t have even dawned on anybody to try something like that.

Q: Given this, how do other journalists perceive the work that you and your team do?

A: I think there’s a range. I think there are some people who are indifferent to what we’re doing and there are some people who are really into it. I think it depends — for the most part, digital writ large has been increasing in its importance over the years, and particularly with Jill [Abramson, NYT Executive Editor] taking over as editor, it’s obviously her signature priority. Everyone’s gotten the message that digital is important. It’s still not at the same level as the printed newspaper, but it’s certainly closing the gap in terms of how journalists feel about what’s a valuable way to spend their time. So, in most cases, journalists are quite receptive when we propose projects and things that are web-first or web-only, but that wasn’t always the case in the past — I think our election coverage absolutely showed that. As an example, we now have a dashboard — think of it as a live-blog on steroids that comingles content from a wide variety of sources into one stream of news. But one thing we’re doing to make it more interactive is fielding questions. For instance, during one of the debates, which might be a two-hour debate, readers are asked via Twitter or via a web form to submit questions as the debate’s going on in real time. We’re then taking those questions — in some cases, as many as half a dozen — and reporters who are there just to do that are answering them, basically doing a fact check. That’s something that never in a million years would we have been able to pull off before. Just the commitment to it would not have been there.

Q: Has it been difficult to get buy-in for projects like these?

A: Oh, of course it’s been hard. It’s a huge newsroom, with 1100 people. There are many parts of this newsroom that are dramatically underserved — let’s put it that way. So yeah, it’s been hard at times to get buy-in, but it’s becoming much easier. Especially in the last two years it’s become quite a bit easier.

Q: Any particular idea why that might be the case?

A: Well, I think in part it’s due to the newness of this group — and I’m talking about projects we’re working on, which often break the article template and are done outside of our content management system. These aren’t necessarily the kinds of things we’ve done before, in the sense of taking them outside of the familiar templates we’ve used. It’s become quite a bit easier because the profile of this group has grown internally — we’ve been around and people know what we do. I don’t know whether there’s a single reason; obviously the support from the top has been incredible. I mean, Jill and John [M. Geddes, Managing Editor (Production)] are both totally into what we’re doing, and so was Bill [Keller, Executive Editor 2003–2011]. All of those things combined make it easier. Plus we have many more people working in this team than we did five years ago. When we started it, it was three people including me. Now it’s 14.

Q: In an article for Idealab you wrote in 2010, you mentioned others in the newsroom viewed you differently once you were given the title “Computer Assisted Reporting Specialist”…

A: This is a little bit of an “inside baseball” kind of discussion about what you call people who do what we do, who are clearly not working in the traditional technology environment but are clearly not writing inverted pyramid-style news stories — what do you call them? I think we were struggling internally to find titles for people that made sense. And ultimately we… I don’t know, I think we kind of punted on it. I mean, our deputies are all “Editors” — that’s their title, “Deputy Editor.” And that makes sense, because they’re doing things that editors do. There are folks underneath them who play what might be described in a traditional software environment as an “architect” role, but there’s really no newsroom analogy to that. “Editor” sounds weird and forced. We’ve struggled with it, trying to find the right terminology to describe who these people are and what they do. Now, the analogy to CAR is that whenever you go from being a journalist who uses data to being a data journalist, people view you differently — very differently. They don’t really see you as a reporter so much as someone who is able to contribute in some very focused ways. That’s not necessarily bad or good; it just means that terminology matters. So the argument I was making there was about whether it’s a good idea to come up with specialist titles like “Hacker Journalist” or “Programmer Journalist,” or whether the only title we should be thinking about is “Journalist.” It’s a debate for us, it’s a debate for the CAR community, and it continues to be an issue that I don’t know will ever see much resolution.

Q: How do you perceive the current state of the open data movement, particularly in the US?

A: The open data movement here has been more focused on, as my friends at ScraperWiki are fond of saying, liberating data — getting it out of the hands of government officials. There they’ve been largely successful: I think you could say that data.gov is a success, and I think you can say that part of it, the transparency piece, the Sunlight Labs of the world, has been successful. But I think the problem — and where the movement has been less successful — is that you get the data, then what? Simply putting it into a database or creating a web search isn’t enough. I think the goals of the transparency movement are slightly different — related, but slightly different — from those of a journalist. It’s like a Venn diagram. We all want public officials to give up data, we want them to give up documents, we want openness and transparency. The journalist wants it as a means to an end, whereas in some cases the transparency movement has been more about the data itself being the end. I think that’s where the differences are. I think they’ve been successful here, certainly.

Q: What avenues, within the newsroom, do you see most affecting people moving forward?

A: That’s a good question, actually. Going back to when I started identifying with the CAR community back in the 1980s and early ’90s, I think there was an ongoing debate about how widespread these tools and technologies could be — I think there was a bit of a naivete about it at the time. I think that’s changed, somewhat. I think we kind of believed that if we just trained enough people, if we just kept at it long enough, there wouldn’t be a need for specialists like CAR folks, and that reporters would just naturally see the value. If we could just demonstrate the value in enough places, at enough times, win enough Pulitzer prizes, everyone would go “A-ha!” and have that head-slap moment where they asked, “How did I ever do my job without these tools and technologies and techniques?” I think that has largely been proven to be way off-base. Right now, I think a very, very, very small subset of journalists industry-wide are even capable — I hate to say it, but even interested in many cases — of working with data in even its simplest forms. To me, that is unbelievable and even tragic. Personally, I’ve wondered how you could cover a local government or a school board, how you could do your job, without some basic data skills — particularly now, when so much government data is so available and so many public records are going electronic. Then how widespread will what we do become? Now you’ve ratcheted it up a whole other level, using data analysis for the purpose of storytelling. In some cases you’ve added so many layers of complexity… I don’t see many journalists hacking Ruby code in the near future. I just don’t think it’s going to happen. So my short answer to your short question would be: very few reporters are going to be experiencing this in any sort of meaningful way beyond the conceptual level.

Q: So you see it as continuing more as a kind of specialist function?

A: Yeah. I don’t see that changing any time soon. And part of the reason is that it’s unfortunate but true: with even the simplest web application, you start putting a database behind something and put it up on the web, and if it’s not done properly, it will fall apart under even a small amount of stress and traffic. It will completely fall apart. Back in the old CAR days, it didn’t matter how terrible your SQL code was, as long as at the end of the day you got the right answer. If that sucker ran for 15 minutes because you wrote some crazy outer join, didn’t index your tables properly, hadn’t normalized your data — it didn’t matter, so long as you got the right answer. That doesn’t work on the web — there are scalability issues, a whole lot of new variables in the mix. Believe me — I got into this thinking I was very technical, and I was not technical; I just didn’t know any better. It’s a very steep climb.
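Pilhofer’s point — slow SQL still gets the right answer, but on the web the cost of the query is what kills you — can be sketched concretely. The table and column names below are hypothetical; the point is that an index changes the query plan, not the result:

```python
import sqlite3

# Hypothetical payments table, the kind a CAR reporter might analyze.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE payments (vendor_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO payments VALUES (?, ?)",
                [(i % 100, 50.0) for i in range(10000)])

query = "SELECT SUM(amount) FROM payments WHERE vendor_id = 7"
slow = cur.execute(query).fetchone()[0]   # full table scan: fine in a one-off analysis

# The fix a web app needs: index the column you filter or join on.
cur.execute("CREATE INDEX idx_vendor ON payments (vendor_id)")
fast = cur.execute(query).fetchone()[0]   # B-tree lookup instead of a scan

assert slow == fast == 5000.0  # same answer either way; only the cost at scale differs
```

In the old CAR workflow, only `slow == right answer` mattered; behind a public-facing app, every request pays the scan.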

Q: For people wanting to specialize in CAR, are there any technologies you’d say would be useful to have knowledge of at this stage in the game?

A: Yes and no. I guess it depends. I want to be careful that we’re talking about the same thing — for me, the most important skill to have for any reporter is having some basic data skills. Knowing your way around a spreadsheet is the most fundamentally important thing that any single reporter could learn. Then from there, you start to get slightly diminishing returns. Next on the ladder would be some basic database skills; next on the ladder would be some basic skills in statistics or mapping, GIS. When you have the opportunity to apply those skills… I think there will be fewer and fewer opportunities as you walk up this ladder. Doing some basic programming could be incredibly valuable — but like I said, it very much depends on the situation and the reporter and how motivated they are to use these tools and technologies in their day-to-day reporting. Many reporters don’t see the value, so they’re not going to do it. As a result, I usually answer this question by saying “Excel,” just leaving it at that, thinking Excel might be the gateway drug into these things.

Q: What do you think the reasons are for reporters just not having the interest in CAR?

A: At the very basic level of spreadsheets and database managers — or even Google Fusion Tables or something like that, encompassing bits of both with some GIS on top — the barrier to entry is relatively low. I don’t think it’s a question of technology so much as a question of journalists just not seeing the value: they don’t have to know it, so why bother? Or finding some other excuse — to me, that’s what it is. Only when you get to the public-facing things does the technology really pose a barrier, where even an inspired, highly motivated journalist who is a beginning programmer is going to make some fundamental mistakes that could be fatal. Scalability isn’t an issue for a newspaper — once you’ve done your analysis, you write your news story. The printed page scales pretty well. Not so for the web.

To start things off, I’ve talked to Conrad Quilty-Harper, who is the Interactive News Editor for the Telegraph.

Ændrew Rininsland: You’d written for Engadget before, what was it like going from a technology website to a traditional newspaper?

Conrad Quilty-Harper: Well, I’d worked for Engadget back in 2007, so quite a while ago. I was working for a lot of blogs, not just Engadget, and then I worked for Mahalo.com, which is an Internet startup.

In terms of differences between those organisations, there isn’t much difference — they’re all very good publications, chasing news and doing it better than other publications. I’ve predominantly worked online at the Telegraph, so essentially I’ve got the additional thing, which is print; we’re a newspaper, and online and print are very different beasts. But in terms of finding a good story and presenting it in an accurate way — there are differences in workflows and the tools we use, but there’s no essential difference in story sense and treating stories and sources; those are pretty universal skills.

Q: Please describe your day, beginning to end.

A: I have kind of a split in the types of work that I produce. One day I might be working on one to three one-off graphics, whether that’s charts or tables or some kind of interactive feature that represents the day’s news. We’ll produce a kind of quick-turnaround graphic that will go and amplify those stories, pick out a new angle that you can’t tell very well with words, pictures or video, and is better told in an infographic sense. So there’s that, which is kind of the core activity; on top of that there are more in-depth projects that might be on a week-long basis. [Someone] was working on a cycle map, so he came to us with this data and we found a story in it and thought, “How could we better represent it?” — that was something we worked on for a week or two before it went live. And then, on top of that, there are even longer-term projects, which start to get into the data journalism territory of the Wikileaks project and investigations into government spending. Those are very in-depth, month-long projects. It’s a mix of those three time scales, and depending on what’s in the news, what news you’ve got or what sources you’re working with, you’ll adapt based on those three things.

In terms of the day-to-day, I’ll do some forward planning, looking at the diary for government calendars and what data releases are coming out, what will be on the agenda for tomorrow, what will be on the agenda for the next week, and thinking how we can prepare for that. A big part is planning and making sure you know what’s going to happen, or trying to predict what’s going to happen. First thing in the morning you generally think about what graphics we’ve got from the day before that we can finish off before our 12 o’clock conference, and then at the 12 o’clock conference you listen to all the section editors and the editors themselves and think of more ideas for later on in the day and going into tomorrow. I work with a couple of graphic designers, a dedicated developer and another journalist, and we’ll discuss what we’re going to be working on, or another journalist will provide some data and we’ll discuss that with the developer and the designer. The designer will tell us how it should work, the developer will code it, and we’ll work with the developer throughout the process on how it’s going to look. Generally we sit, four or five people, discussing what’s coming up, what’s happening now, and how we can adapt our tools in order to make a great graphic or advance some kind of story somewhere using data.

Q: How big is your team?

A: I’m working with a team of four other people directly, which is not at full capacity yet but will be next month. We do the day-to-day stuff, making sure the daily graphics get up, while also working on longer-term investigations and projects that show off our work and collecting data. I’ve worked with the Lobby team quite closely — there’s about half a dozen journalists I’ve worked with on investigations for front-page stories and interactives to go with them — so our team works across the newsroom as well. In a sense we provide a service to other areas, but I like it better if we’re presented with ideas and we choose the best idea, run with it and turn it into something that suits our medium. We’re not daily newspaper reporters; we don’t run out into the street and find this information — we will generally digest information and build the right tool to go with it.

Q: It seems you mainly do online stuff. Do you ever interact with the print side of it?

A: I’ve done many newspaper stories that are newspaper stories first and foremost, traditional scoops. They’ll go up online as well, and vice-versa. There’s no distinction between the two; if there’s a good story in print, it’ll be a good story online, and generally that means you’ll get a good graphic out of it. There’s no real distinction in terms of the medium.

Q: In what ways do your interactions with the news editor and the web editor differ?

A: It’s like any other newsroom, really; I’ll come up with a pitch for a story and will present it to a specialist or news editor, and they will make a judgement on it, and that will determine where it goes. It’s no different than any type of story, picture gallery, interview or whatever.

Q: Would you mind describing that workflow a little bit, how it goes from writing to the web?

A: We have a CMS, and I will write the story offline, share it with lawyers if need be, share it with an editor, share it with my team. I might not do all of that, I don’t always show stories to the lawyers or editors; sometimes I’m asked to do stories by editors, so I’ll write with them and work on graphics because of a request, but then I’ll just copy and paste it into the CMS, write the headlines, check that all the SEO terms are done correctly, put the graphic on the article; it’s just like any blog or WordPress. We all self-publish — everyone at the Telegraph self-publishes their work.

Q: So it’s very much a web-first workflow, in that it’s driven to having stories online first.

A: Yes; if it’s a breaking news story we’ll tend to put it online first, because we’re trying to keep ahead of everyone else. Things will get held back for the paper, but those are generally exclusives. If we’re doing something that’s exclusive to our newspaper, we’ll generally let the newspaper announce it and then it will go online, though that’s not a rule. But those decisions are a bit above my pay grade, I’m not one of those editors.

Q: How do other news writers perceive your work?

A: They generally get lots of benefit out of our work. Our graphics are generally highly supplementary to their stories — you want a good interactive to tell your story better. I’ve only had positive reactions from journalists at the Telegraph to my work.

Q: But do you feel that other writers understand the data-driven aspect of it, and how data feeds into finding stories?

A: Yes — data journalism is not a new thing; reporters have always worked with data and information that’s in a spreadsheet format or a list. If you go back through our archives, we’ve always been working with numbers. The term I prefer is “data-driven journalism,” by which I mean: I have a data set and I’m trying to find a story from it — the data will be the first thing that brings you to that information. Maybe you find that data and you find a story, but most of the time you’re led to that data and then you’ll find a story in it. Having interactive graphics, and having a team dedicated to processing large sets of tricky-to-process data, is something other reporters have always thanked us for — or asked for more of. All you need to do is show how easily you can get a scanned PDF into a spreadsheet and write your headline figures — [after the] one time you do that with a reporter, they get it.

Q: In the time you’ve been at the Telegraph, how has its newsroom changed in relation to new technology or the open data movement? Do you perceive there being any changes? Is it a continual evolutionary process?

A: We’ve continually updated our tools and technology. We’re quite far ahead of where we were 18 months ago in terms of the tools we have available to ourselves and other reporters — charts, tables, processing data — but we haven’t stopped; we’re continually pushing forward with new tools and new techniques.

Q: In what ways have these new tools informed the reporting that you do?

A: My favourite little piece of software is PDF2XL, which allows you to take scans and turn them into Excel. It’s quite a simple thing if you put it like that, but actually, if you’re under deadline and trying to get some information up quickly, it’s an invaluable tool. It’s something I didn’t have a year ago, and it enabled us to do a page 4 spread in the newspaper on refused honours — people who have refused honours. That was released by the Cabinet Office in a PDF with horrible formatting — a badly scanned, scratchy image. In half an hour, we were able to take that, put it into a format we could load into our CMS for online, and make an interactive table. You can search the names — you can see Lucian Freud, you can see Roald Dahl just by typing in “Dahl.” That was key; I’d previously not seen any technology that could do it, but this software does it all. We also have a server that enables us to run PHP code… We’ve got databases now that we didn’t have before, which enable us to run bigger infographics and run our services — our tools and generators. That’s all new stuff.
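PDF2XL is commercial software, but the cleanup step it automates — turning raggedly extracted text into rows and columns — can be sketched in a few lines. The data below is hypothetical, loosely modelled on the honours list; the parsing rule (split columns on runs of two or more spaces) is just one common heuristic for this kind of extraction:

```python
import re

# Hypothetical lines as they might come out of a badly scanned PDF,
# with inconsistent spacing between the columns.
lines = [
    "Dahl, Roald      1986   OBE",
    "Freud, Lucian        1977  CH",
    "Hockney, David   1990   Knighthood",
]

# Split on runs of two or more spaces to recover the three columns,
# then emit spreadsheet-ready rows.
rows = [re.split(r"\s{2,}", line.strip()) for line in lines]
table = [{"name": n, "year": int(y), "honour": h} for n, y, h in rows]

# A name lookup, like the searchable interactive table described above.
matches = [r for r in table if "Dahl" in r["name"]]
```

Single spaces inside a field (like “Dahl, Roald”) survive, while the wider gaps between columns become delimiters — which is why this works even on sloppily spaced scans.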

Q: What’s your favourite part of the job?

A: I like all of it — I enjoy working on groundbreaking stories that people want to read, and also presenting complex information in an easy-to-understand manner. I like the new angles of it — for instance, with budgets, one of my stories was a council credit card spending scandal. They’d spent a hundred million on credit cards, and you can see the kind of colour in the story, but what we were able to do using these tools was take all this data from 200 councils and publish it all online for everyone to see. Which meant that not only did you have your big-impact story, you also had 200 individual stories that people in their local area could examine, holding their local politicians to account for their spending — or even congratulating them for not spending it. The sense of transparency enabled by taking large datasets and making them very contextual for people in their local areas is something I really get excited about. It’s not only doing a good newspaper story or a good online story — it’s being able to do extra stuff and push it a bit further.

Q: With that in mind, what’s your take on opening up the data behind stories?

A: I do it with every story — for every story, you can access our data. We never hide our data. Mainly because, why would we hide it? We want people to read our stuff, so why would we hide our information? If people want to know the workings behind our stuff, we’ll always release it. We’ll always release the methodology, so people can hold us to account as well. If we want to hold governments to account and ask them to be more transparent, I don’t see why we shouldn’t be more transparent ourselves.

Q: Is there any instance where somebody’s taken some of your data and done something really impressive with it?

A: One of my favourite examples was the other way around — the Guardian had worked with an academic on the new boundary changes released by the boundary commission as a draft proposal. We were really frustrated, because the boundary commission had not released any interactive, reusable maps — they had released 600 PDFs with individual images of maps, which we couldn’t show to our audience as “This is your local area, this is how your boundaries have changed.” So the Guardian contacted an academic in Sheffield, who managed to figure out how to use the table data and connect it to ward-level areas, and they created an interactive map and put it side-by-side. What we did was take their data — which we’re allowed to do under the Creative Commons license they release their data under — and improve on it by linking the two maps together. You had the old map and the new map, and when you moved one map they moved together; when you zoomed in, they both zoomed in. It was like two sheets of paper together on a desk — that kind of intuitiveness. And the Guardian actually borrowed our design back — in terms of collaboration, that’s the best example of what two newspapers are capable of doing. They sourced the data, we improved on the interaction and provided our input, and they improved their design likewise. There are other examples of going back and forth — we’ve used Guardian data and provided a link back, and so on. It’s quite a mutually beneficial arrangement; if you release the data transparently and have a mind of “Let’s improve this together,” you can do good things.

Q: Given the nature of sharing data and collaborative data, how would you describe your relation with other newsroom data teams?

A: Friendly! (laughs) We constantly take inspiration from other newsrooms. The New York Times is leading the field in this space and they have a massive team of 20-plus people, but we also try to take inspiration from the HTML5 movement as well. There’s quite a big culture of people out there who are just experimenting with new ways of displaying information, and new, open source tools you can use. We use a visualisation library I think an intern wrote somewhere in Chicago, where you take map boundaries and make them interactive, so you can click on them and change data. We use that tool and provide help to them; they provide help to us on how to update it and make it work. You’ve got DocumentCloud — we’ve given some bug reports to them… There’s a whole bunch of tools out there and different bodies, making the harder tasks easier.

Q: How do you see the current state of open data within the UK?

A: There’s been good progress so far, and I’ve written about how happy I am with the open data agenda and how pretty much every bit of it has been reached on some level. I’m not happy with certain bits — I wrote an article about how crap the crime mapping has been — but it’s slowly improving, and there’s always more they can release. We have constantly done stories about new data sets that are not on the agenda, and there’s always more. There’s never going to be enough. (laughs) The government is running consultations about it and we’ve had a say and given our two cents, but part of our job is to put pressure on the government and different bodies to release this data. You can put it in a way that helps both sides — “If you release this data it helps you look better, you’re more transparent” — and we can get a good graphic out of it, present information to the public and allow other developers to reuse it. It’s all mutually beneficial. Progress has been good so far, but there’s a long way to go, particularly in areas like government mapping.

Q: What’s the most frustrating aspect of your job?

A: Scanned PDFs, or data provided on the back of a piece of paper. Part of my job is about identifying bottlenecks in terms of data: how can we get access to this data in a structured format so we can create graphics around it, or automate activity that people are doing manually? I don’t really find it frustrating — it’s part of my job. Part of my job is to reduce the frustration and find innovative ways around things that used to be very frustrating.

Q: How do you see the idea of data-driven journalism evolving?

A: We’ve experimented with live data in the past — we did an interactive map of the royal wedding that had people tweeting experiences from the event as it was happening. We’d like to do more in that field. In terms of interactivity where people provide data to us, we’ve been trying quizzes — we recently did a very successful one on Alzheimer’s, where if you suspected a relative had started suffering from Alzheimer’s you could put symptoms in, which were provided by a medical body. It had an interactive element; it’s that kind of stuff that interacts with social networks — we’re putting the data from our stories in front of people in a way that’s understandable and readable, getting them to react to it and take action based on it. ProPublica did an experiment linking one of their stories about schools to Facebook, so you could see your local area and share stories about your local school. That’s a really agenda-setting kind of experiment and I think we’ll see more of that.

Q: How does social media — particularly Facebook and Twitter — inform the workflow of your day?

A: Very much so. On a very simple level, if people are having technical difficulties with our applications and infographics, we will usually find out from them via Twitter, and we can then send them a message saying we’ve fixed it. But also, taking the council credit card example again, that spawned a dozen different stories in different areas, so you can track on social media how people are reacting to your story and see follow-up angles. People asking questions about your data, raising an interesting angle you might not have thought of, or putting some context to it that makes it a better story — you can always track that. It’s very useful as sentiment tracking for what people are thinking about your work. Also, with the quiz example, we’ve done some quizzes where you can share what your virtual diet is and how addicted to gadgets you are. At the end, you get a score saying you’re addicted to gadgets or you don’t use gadgets very much, and users could share that on Twitter and compare with their friends. We’ve had really good success with that, and people like those sorts of things. News games using social media might be something we explore as well.

Is WordPress being sofa king dumb by returning Error 404 for custom taxonomies, or failing to recognize taxonomy-{name}.php template files in your theme? Does it inexplicably start working if you set permalinks to “default”? And can you find very little documentation as to why this might be the case?

In the hopes this will save somebody a bunch of time — and potential lost sleep, as was my case this morning — I’m posting the awful, hack-y solution that made stuff start working. I’m sure this is the wrong way of doing it, and if you have any plugins that rewrite page URLs, it may break them. No warranty at all if this doesn’t work, and I will not provide support for anything related to custom posts/taxonomies. (Protip: for projects outside of WordPress’ posts/pages/links/media content types, don’t use WordPress — USE DRUPAL.)

Anyhoo, add the following to your theme’s functions.php:

~~~
add_filter('init', 'flushRules');

// Remember to flush_rules() when adding rules
function flushRules() {
    global $wp_rewrite;
    $wp_rewrite->flush_rules();
}
~~~
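One caveat with the hack above: flushing on `init` rebuilds and saves WordPress’ entire rewrite array on every page load, which is expensive. A gentler sketch — assuming you’re registering your taxonomies in a theme, and noting that the `myprefix_` function name is just a placeholder — flushes only once, when the theme is activated:

~~~
// Lighter-weight alternative: flush only on theme activation,
// so the rewrite rules aren't regenerated on every request.
add_action('after_switch_theme', 'myprefix_flush_rules_once');

function myprefix_flush_rules_once() {
    flush_rewrite_rules();
}
~~~

If your taxonomies live in a plugin instead, the same idea applies with `register_activation_hook()`.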

Did this help you? Let me know by leaving a comment!

I’ve recently been playing with the semantic web (for the uninitiated, the semantic web is a structuring of web content in terms of what it depicts instead of just a bunch of linked text files) and have come up with the following two queries — let me know if you find these useful!

For SPARQL (i.e., DBpedia), the following should return how many competitors each country is sending to the London 2012 Olympics:

~~~
SELECT ?country ?competitors WHERE {
  ?s foaf:page ?country .
  ?s rdf:type <http://dbpedia.org/ontology/OlympicResult> .
  ?s <http://dbpedia.org/property/games> "2012"^^<http://www.w3.org/2001/XMLSchema#int> .
  ?s dbpprop:competitors ?competitors
} ORDER BY DESC(?competitors)
~~~

See results by going here.
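If you’d rather run a query like this from code than from the web form, DBpedia’s public SPARQL endpoint accepts the query as a URL parameter. A minimal Python sketch of building such a request URL (the endpoint address and the JSON results format are assumptions based on the public DBpedia service; no request is actually sent here):

```python
from urllib.parse import urlencode

# The SPARQL query from above, as a plain string.
query = """
SELECT ?country ?competitors WHERE {
  ?s foaf:page ?country .
  ?s rdf:type <http://dbpedia.org/ontology/OlympicResult> .
  ?s <http://dbpedia.org/property/games> "2012"^^<http://www.w3.org/2001/XMLSchema#int> .
  ?s dbpprop:competitors ?competitors
} ORDER BY DESC(?competitors)
"""

# Build a GET request URL against the public DBpedia endpoint,
# asking for JSON results instead of the default HTML table.
endpoint = "http://dbpedia.org/sparql"
url = endpoint + "?" + urlencode({
    "query": query,
    "format": "application/sparql-results+json",
})

print(url)
```

From there, any HTTP client can fetch the URL and parse the JSON bindings.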

Meanwhile, if you want an MQL query (i.e., Freebase), use the following to get a comprehensive array of Golden Raspberry Award “winners”:

~~~

{
  "query": {
    "id": "/en/golden_raspberry_awards",
    "type": "/award/award",
    "category": [{
      "name": null,
      "name!=": "Razzie Award for Worst Actor of the Decade",
      "AND:name!=": "Razzie Award for Worst Actress of the Decade",
      "nominees": [{
        "year": null,
        "award_nominee": [],
        "nominated_for": [],
        "sort": "-year"
      }], /* nominees */
      "winners": [{
        "s1:/type/reflect/any_master": [{
          "type": "/award/award_winner",
          "name": null,
          "key": [{
            "namespace": "/wikipedia/en",
            "value": null,
            "limit": 1
          }] /* key */
        }], /* award_winner */
        "s2:/type/reflect/any_master": [{
          "type": "/award/award_winning_work",
          "name": null
        }] /* award_winning_work */
      }] /* winners */
    }] /* category */
  } /* query */
}

~~~

Output here.
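If you want to submit this kind of query programmatically rather than through the query editor, Freebase’s mqlread service expects the query wrapped in a `{"query": ...}` envelope and sent as JSON. A minimal Python sketch of building that envelope (the query here is a trimmed-down version of the Razzies query above, without the nominee and reflection clauses):

```python
import json

# A trimmed version of the Golden Raspberry query: award categories
# and, for each, the names of the winners.
mql_query = {
    "id": "/en/golden_raspberry_awards",
    "type": "/award/award",
    "category": [{
        "name": None,
        "winners": [{"name": None}],
    }],
}

# mqlread expects the query wrapped in a {"query": ...} envelope;
# Python's None serializes to MQL's null placeholder.
envelope = json.dumps({"query": mql_query})

print(envelope)
```

The serialized envelope is what goes in the request body (or the `query` URL parameter) of the mqlread call.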

I’ll be writing a fairly comprehensive blog tutorial on this sometime in the next few weeks; follow me on Twitter for updates.