As of bug 1536174, I've added an internal mechanism to Firefox that can record content frames during composition. Note that this currently only works on Windows using D3D11 composition. This is still in very early stages and will likely see improvements over the longer term, but right now, this is basically how it works:
You will now find a new directory in your working directory called windowrecording-<timestamp>, which contains a series of PNGs named using the following convention:
frame-<framenumber>-<ms since recording start>.png
You will notice that only frames where actual content changes occurred are captured (not scrolling, asynchronous animations, parent process changes, etc.); this will likely become more flexible in the future. The directory where the output is delivered can optionally be selected using the layers.windowrecording.path preference. Use a trailing slash for expected behavior.
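Since the naming convention above encodes both the frame number and the timestamp, recordings are easy to post-process. A minimal Python sketch of parsing a directory listing into (frame number, ms offset) pairs (the function name is mine, not part of any shipped tooling):

```python
import re

# Matches "frame-<framenumber>-<ms since recording start>.png"
FRAME_RE = re.compile(r"frame-(\d+)-(\d+)\.png$")

def parse_frames(filenames):
    """Extract (frame number, ms since recording start) from a list of
    filenames, skipping anything that doesn't match the convention."""
    frames = []
    for name in filenames:
        m = FRAME_RE.search(name)
        if m:
            frames.append((int(m.group(1)), int(m.group(2))))
    return sorted(frames)
```

This gives you an ordered timeline of content changes that later analysis steps can consume.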
There are a couple of caveats when using recording:
Recently I've been working on a script that can use the generated recordings to perform pageload analysis. The scripts can be found here, and in this post I will attempt to describe the right way to use them. The scripts, as well as this entire description, are meant to be used on Windows, so these instructions will not be accurate on other platforms (and in any case, recording doesn't work there yet either, nor will most people have an open source PowerShell implementation installed). This is also in very early stages and not particularly user friendly; you probably shouldn't be using it yet and should just skip to the demo video :-).
Prerequisites
Note that visualmetrics.py also expects the ffmpeg and ImageMagick binaries to be included in your PATH.
The next step is to modify the SetPaths.ps1 script to point to the correct binaries for your system; these are the binaries that the AnalyzeWindowRecording script will use.
Recording Pageload
The next step is recording a pageload. First, go to 'about:blank' to ensure a solid white reference frame to start from. A current weakness is that the timestamp of the video (and therefore the point in time the analysis will consider the 'beginning' of pageload) is the time when the recording begins, rather than navigation start. Therefore, it is best to navigate immediately after beginning the recording. This could be improved by scripting the recording and navigation start to occur at the same time, but for now we'll assume that a small offset is acceptable.
Start the recording and immediately navigate to the desired page; wait for the pageload to finish visually and then end the recording.
Analysis
The final step is to run the analysis script, which is done by executing it as follows:
.\AnalyzeWindowRecording.ps1 <directory containing recording> <annotated output video>
Note that the script may take a while to execute, particularly on a slower machine. The script will output the recorded FirstVisualChange, LastVisualChange and SpeedIndex to stdout, as well as generate an annotated video that displays information at the top about the current visual completeness and the different stages of the video.
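For readers unfamiliar with the metric: SpeedIndex is conventionally defined as the integral over time of (1 minus visual completeness). Assuming the script follows that standard definition (its exact implementation may differ), a minimal Python sketch of the computation looks like this:

```python
def speed_index(samples):
    """Compute SpeedIndex from (timestamp_ms, completeness) samples,
    where completeness is in [0, 1] and samples are in time order.
    Completeness is treated as a step function: between two samples it
    holds the earlier sample's value."""
    total = 0.0
    for (t0, c0), (t1, _) in zip(samples, samples[1:]):
        total += (1.0 - c0) * (t1 - t0)
    return total
```

Lower is better: a page that is visually complete immediately accumulates no area, while one that stays blank for a long time accumulates a lot.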
It is important to note that the script will currently chop off the top 197 pixels of each frame. This was accurate for my recordings but most likely isn't for people recording with different DPI scaling. In the future I will make this parameter configurable or possibly even automatically detected; for now you will have to manually adjust the $chop variable at the top of the AnalyzeWindowRecording script for your situation.
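Conceptually the chop is nothing more than dropping the top rows of each frame (the browser chrome) before comparing pixels. The actual script operates on image files via its tooling; this Python sketch just illustrates what the $chop value means, using a plain list of pixel rows:

```python
def chop_top(frame_rows, chop=197):
    """Drop the top `chop` pixel rows from a frame, mirroring the $chop
    variable in AnalyzeWindowRecording.ps1. `frame_rows` is any
    row-major representation of the image (e.g. a list of pixel rows)."""
    return frame_rows[chop:]
```

If your window chrome is taller or shorter than 197 device pixels (different DPI scaling, different toolbar configuration), adjust the value until only page content remains.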
Finally, I realize these are lengthy instructions and usage of this functionality is currently not for the faint of heart. I wanted to make this information available as quickly as possible though, and we expect to be improving on this in the future. Let me know in the comments or on IRC if you have any questions or feedback.
First, some preparation: make sure you've got the latest version of Mercurial using ./mach bootstrap
. When you get to the Mercurial configuration wizard, enable all the history editing and evolve extensions; you will need them for this to work.
Now, to go through the commands, first, the basics:
hg qnew
basically just becomes hg ci
we're going to assume none of our commits are necessarily permanent, and we're fine having hidden, dead branches in our repository.
hg qqueue
is largely replaced by hg bookmark
, it allows you to create a new 'bookmarked branch', list the bookmarked branches and which is active. An important difference is that a bookmark describes the tips of the different branches. Making new commits on top of a bookmark will migrate the bookmark along with that commit.
hg up [bookmark name]
will activate a different bookmark, hopping to the tip of that branch.
hg qpop
once you've created a new commit becomes hg prev
An important thing to note is that unlike with qpop, 'tip' will remain the tip of your current bookmark. Also note that unlike with qpop, you can 'prev' right past the root of your patch set and into the destination repository, so make sure you're at the right changeset! It's also important to note this deactivates the current bookmark.
Once you've popped all the way to the tree you're working on top of, you can just use hg ci
and hg bookmark
again to start a new bookmarked branch (or queue, if you will).
hg qpush
when you haven't made any changes basically becomes hg next
. It will take you to the next changeset; if there are multiple branches coming off here, it will offer you a prompt to select which one you'd like to continue on.
Making changes inside a queue
Now this is where it gets a little more complicated. There are essentially two ways one could make changes to an existing queue. First, there is the most common action of changing an existing changeset in the queue, which is fairly straightforward:
In short qpop, make change, qref, qpush
becomes prev, make change, ci --amend, next --evolve
.
The second method of making changes inside a queue is to add a changeset in between two changesets already in the queue. In the past this was straightforward: you qpopped, made changes, qnewed, and just happily qpushed the rest of your queue on top of it. The new method is this:
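A sketch of the equivalent evolve-based steps, based on my understanding of the extension (exact behaviour may vary with your evolve version):

```
hg prev            # pop back to the changeset you want to insert after
                   # ... make your changes ...
hg ci -m "new changeset in the middle of the queue"
hg next --evolve   # rebase the next orphaned changeset on top
                   # repeat 'hg next --evolve' until you're back at the
                   # tip of your bookmark
```

The commit creates a new branching point, leaving the rest of your old queue as orphans; each 'hg next --evolve' then stabilizes one descendant onto the new history, much like qpush used to reapply the remaining patches.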
Pushing
Basically hg qfin
is no longer needed; you simply update to the changeset you want to push, and you can push up to and including that changeset directly to, for example, central. ./mach try
also seems to work as expected and submits the changeset you're currently at.
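To summarize the mapping covered so far, here is a quick cheat sheet (the evolve-side commands are as described above; your exact workflow may vary):

```
# mq command                     # evolve-era equivalent
hg qnew my-patch                 hg ci -m "my patch"
hg qqueue --create my-queue      hg bookmark my-queue
hg qpop                          hg prev
hg qpush                         hg next   (or: hg next --evolve after amending)
hg qrefresh                      hg ci --amend
hg qfinish --applied             (not needed; push the changesets directly)
```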
Some additional tips
The hg absorb
extension I've found to be quite powerful in combination with the new system, particularly when processing review comments. Essentially you can make a bunch of changes from the tip of your patch queue, execute the command, and based on where the changes are it will attempt to figure out which commits they belong to and amend those commits with the changes, without you ever having to leave the tip of your branch. This not only takes away a bunch of work, it also means you don't retouch all of the files affected by your queue, greatly reducing rebuild times.
I've found that being able to create additional branching points (or queues, if you will) off some existing work is, on occasion, a helpful addition to the abilities I had with Mercurial queues.
Final Thoughts
In the end I like my versioning system not to get in the way of my work. I'm not necessarily convinced that the benefits outweigh the cost of learning a new system, or the slightly more complex actions required for what are, to me, the more common operations. But with the extensions now available I can keep my workflow mostly the same, with the added benefit of hg absorb.
I hope this guide will make the transition easy enough that in the end most of us can be satisfied with the switch.
If I missed something, am in error somewhere, or if a better guide exists out there somewhere (I wasn't able to find one or I wouldn't have written this :-)), do let me know!
Disclaimer: The opinions I will be expressing are solely my own; they are in no way related to Mozilla or the work I do there.
This is the second part of a series of posts I will be doing on society's current issues. You can find my previous post here: When silence just isn't an option
So, I've heard a lot of people say we now live in a 'post-fact' society. I've found this a very interesting topic recently, since information is so important when making decisions; if 'facts' are no longer important, it seems like that might be a bit of a problem. I will attempt to lay out my thoughts here on what the deal is with facts, what they are, and how we can go (back) to being a 'factual' society.
It should be pretty simple to establish facts, right?
Let me start with a little about what makes a fact. Whenever you talk about something like this, anyone can come up with quasi-philosophical arguments about how there are no facts. Maybe they're right: our eyes can easily be tricked, maybe the moon isn't there when I'm not looking at it, maybe the sun does have an army of invisible tiny little midgets orbiting it, and maybe gravity will suddenly disappear tomorrow. But that is not how we generally live our lives; in our day-to-day lives we take the things for which we have directly observable evidence and say they're facts. We do this for observations (the wall is white, the book is made of paper) and predictions (when I press the light switch the light will turn on, when I heat up this chicken it will be cooked). These types of deductions based on our senses and inductions based on past experiences are essential to being able to live our lives, and I really don't want to get into those types of 'fact or no fact' arguments.
Now next to facts we have beliefs. Often a belief is something we have been told many times during our lives, something that we would like to be true, that makes us feel justified or better about ourselves, or something we were told by someone we believe in, and we can experience beliefs with various degrees of certainty. As humans we act upon our beliefs as much as we act upon facts: you might believe in a deity, you might believe everyone in the world should be kind to one another, or you might be an existential nihilist who believes there's no point to life either way. The point is that we all have a large collection of beliefs, and the vast majority of us are naturally uncomfortable changing those beliefs. To avoid being in that position, we subject apparent evidence in favor of those beliefs to far less scrutiny before we admit it into our lives, and we reject apparent evidence against those beliefs much more easily than we would reject evidence confirming them.
But the line between those two can become somewhat blurry. Essentially we have a spectrum: at one end are the things we are 'certain' are true, which we call facts; at the other end are the things we are certain are not true; and in the large range between them lie the beliefs we do or do not subscribe to. When an individual holds a belief with an extremely high degree of certainty, to them that belief is as much a fact as anything the rest of us might agree on as fact. It would take a lot of observable evidence to dissuade them from that idea. If you've lived your life believing the universe revolves around the Earth, the notion of that not being true would be extremely hard to accept. Superficial observation does seem to confirm that it is the case, and after all it does seem the most likely explanation. It will take a lot of convincing for you to accept that the Earth, in fact, revolves around the Sun.
Let's stick with that example for a second. In the modern world, for some of us, the Earth orbiting the Sun is a fact: we have looked through a telescope, traced the motions of the sun, the stars and the planets in our sky, and we have observed directly that the universe revolving around the Earth cannot explain our observations. Subsequently we have continued to make observations which show the only possible explanation (without going into absurdity) to be the Earth revolving around the Sun. The people who have made these observations can produce a set of predictions and descriptions of observations, and as experts, through the system of education, share these with as many people as possible. If we're in a position where people consider us credible, they will acknowledge our observations and predictions; they won't try to reproduce them themselves, but in the absence of a credible source to the contrary they will believe us. As such, most of the western world holds a strong belief in heliocentrism, strong enough for most to accept this belief as fact.
You end up trusting individuals or institutions to tell you what is or isn't fact. That is essentially a good thing, since none of us would have time in our lives to observe all the evidence for all the things we are told are facts and that are relevant to our modern-day lives. Belief in various forms therefore plays an extremely important and often positive role in our lives. But there is always a danger in believing in ideas, people or institutions. When rapper B.o.B. sent out a number of tweets supposedly proving the Earth was flat [1], the vast majority of people disagreed. They pointed out that although some of his own predictions were right, his observations were lacking. However, there were some who saw their own beliefs confirmed in his observations and rushed to join him; there is actually a society that collects evidence for a flat Earth [2], partially grounded in geocentric religious beliefs.
So what's the point? The point is that in the case of any fact for which we do not take the time to observe the evidence ourselves (of which there will always be a lot), whether we accept it as a fact or not depends on the strength of our belief in the source that has collected the evidence for that fact.
Now how do we get to the facts?
Scientists and engineers probably have an above-average ability and tendency to look at claims from different angles and seek unbiased evidence for those claims. But it's important to remember that even for them it turns out to be extremely difficult to tune out beliefs and biases, and there exists a long history in science of conducting research focused on confirming views for which there really wasn't much evidence in the first place. Often, as with ordinary human beliefs, that path involved taking evidence we liked at face value, and making extensive attempts to debunk evidence that we didn't like.
This continues to the present day, and we all do it! After all, how many liberals have really investigated the claims about the crowd size at the inauguration of Donald Trump? Didn't most of those people see the pretty picture on the news and go 'Well see, there's all we need to know!', even though that picture would not have been particularly hard to manipulate? (SPOILER: the news gave a reasonably realistic idea of the crowd size difference, but don't take my word for it! [3][4]) How many really went to look at research about crime in Sweden rather than read the article that said 'nah, it isn't true' and then went 'Ah, I'm sure CNN did their homework!'? (SPOILER: they did [5]) So we can argue that once you've done research into this, certain media outlets turn out to be more reliable than others, but most of us don't really do that research. Our environment has told us certain media outlets are reliable, and we take what they say at face value. That really isn't that different from what people do who put their faith in media outlets that in practice really do manipulate facts on a large scale, so it would be silly to think that you can just point people at the media outlet you believe to be factual and expect them to accept that information as such.
So how do we convince people the Earth orbits the Sun?
Here is where I believe the internet is actually helping us, and should make the job of autocratic dictators and media outlets with strong political agendas a lot harder. Unlike as recently as two decades ago, everyone with an internet connection now has direct access to a wealth of open source information: images, tweets and many other types of data, directly from their sources. A single source might be unreliable, but by combining a wide array of sources, from facebook, instagram and twitter to public satellite imagery, several parties (one of the better known being bellingcat [6]) have shown that it's possible to collect information in such a way that stories from media outlets and the government can be corroborated (or debunked!) in an independent way, citing sources that anyone can verify directly from their articles. This wealth of data has made it far easier for anyone to verify the reliability of the media outlets they consume.
In the end, the issue of how much someone can change your beliefs is all about trust. If there's one thing that is almost certainly true, it's that trust in the traditional media outlets is at a historical low point [7]. However, a source of optimism is that research indicates that citations, expert opinions and similar things do matter. They increase the amount of trust people experience towards a source of information and, perhaps more importantly, once people discover a source of information gets its facts wrong, their trust in that source does decrease [8]. I believe there's a lesson in there for those of us, be it journalists or scientists, who are attempting to convey information to the masses, or even those of us simply trying to convey a 'fact' over dinner. Don't simply tell people they're wrong, don't simply tell people you have facts that are better than theirs, and don't try to get away with one questionable image to back up your story.
When you want to convince people of your views, dig deep, look for a myriad of different sources that show evidence for your point of view (the 'fact' you are trying to convey, if you will) and never forget to really look for evidence to the contrary. In the worst case you'll discover that you were wrong and you'll have to give up a belief, which although uncomfortable is worthwhile. In the best case you will be much better prepared when someone tries to confront you with their conflicting ideas. Whenever possible, communicate with sources that are accessible and understandable for whomever you are trying to reach with your story. Never argue from authority; information has no authorities, only experts. But most importantly, never be deceptive when trying to convince someone of your point of view. There is no way you will be able to instantly change someone's beliefs, but a consistent, steady stream of truthful information can erode them.
Nicolaus Copernicus' ideas went against everything people believed in, and the government and the church tried to do everything in their power to give people 'alternative facts'. Galileo, defending the Copernican model, made a wealth of observations and predictions, consistently showing the superiority of these ideas [9]. Eventually he would die under house arrest, and it would take until 1758 for books describing the heliocentric model to be allowed by the Catholic church. Governments and other institutions holding beliefs which hold back the flow of facts is nothing new in this world, but in the end, no amount of misinformation can stand the test of time when confronted with real facts.
And now we have the internet.
I realize social media provides an interesting perspective on the spread of information; I will treat that in a separate article.
[1] http://edition.cnn.com/2016/01/26/entertainment/rapper-bob-earth-flat-theory/index.html
[2] http://theflatearthsociety.org/home/
[3] http://www.factcheck.org/2017/01/the-facts-on-crowd-size/
[4] http://www.usatoday.com/story/news/politics/2017/01/24/fact-check-inauguration-crowd-size/96984496/
[5] https://www.bra.se/bra/bra-in-english/home/crime-and-statistics/rape-and-sex-offences.html
[6] https://www.bellingcat.com/
[7] http://www.gallup.com/poll/185927/americans-trust-media-remains-historical-low.aspx
[8] http://bigstory.ap.org/article/35c595900e0a4ffd99fbdc48a336a6d8/poll-vast-majority-americans-dont-trust-news-media
[9] https://en.wikipedia.org/wiki/Dialogue_Concerning_the_Two_Chief_World_Systems
Today, I will be doing something I didn't think I would ever be doing: I will start to post publicly on topics that may be considered somewhat political. My personal opinions and thoughts, if you will. This is something I have always said I would never do; once something is written, it is unlikely to ever go away, and may always be associated with you and the work you do. Having said that, I do believe the climate we currently live in has reached a point where I don't have any other choice.
Why wouldn't I speak up?
As engineers, scientists, and other types more concerned with what we're building and discovering about the world around us, it seems to only make sense that we wouldn't publicly take a stance that could be considered political. Above all, our task is to follow the data wherever it leads, and to use that data in various ways to benefit humankind. In this process our personal feelings about topics only pose a threat to our scientific integrity; every one of us is susceptible to biases, and all we can do is try as much as possible to eliminate them from our work. Throwing out a bit of data because it doesn't seem to support the hypothesis you're trying to prove, or building something that inherently puts one group at a disadvantage compared to another for personal gain: those things are fundamental crimes against our professional integrity. And not only that, they will always backfire eventually, but more on this later.
And this is where it all started. If I speak openly about my political opinions, won't the data I present, the things I build, inevitably be viewed as colored by them? I have genuinely seen someone comment on a crash once along the lines of 'Firefox always crashes when I go to conservative websites, it never does when I go to your liberal websites. If you don't stop promoting your liberal agenda I will switch browsers.' Whether that is true or not (it is not), it was exactly that phenomenon: an organization perceived as liberal produced a product, and people believed that product was inherently designed to put them and their ideas at a disadvantage. Considering the amount of effort I put into ensuring that the data I collect and the things I produce are not colored by my personal positions, this is something I want to avoid at all costs.
After all, as long as scientists collect a wealth of data, ensure that experiments are reproducible by any group of people, and the knowledge we gather is then used to build things that obviously and visibly work, we don't need to take a public political stance, right? People will look at the data we collect and the things we build, realize that they're good, and be able to form well-informed opinions for themselves that fit within their ideological views. Since I believe only a tiny percentage of the population is inherently evil, things will then work themselves out just fine, so we're good!
So what changed?
With the tools we're building, giving people the ability to communicate with people from all over the world, people with different ideas and cultural backgrounds, I always believed an atmosphere of compassion, understanding and a desire to help others, whoever they are, would automatically arise. I thought that with the knowledge we were collecting and spreading - about our place in the universe and how small and vulnerable we are on a cosmic scale - we could automatically foster an appreciation of life on this planet, we would cherish it going forward.
I thought we were at a point where history wouldn't repeat itself, where things would only get better from here.
I have now come to a point where I can no longer deny that I seem to have been wrong. Both new and old problems are festering among our species, and we have to find new ways of dealing with them, because the old ones aren't working. As humans we appear to be inherently complacent, particularly when our own livelihood isn't directly threatened, but the time for that is past; we have to change course now, because the risk of 'sitting this one out' is simply too great. As engineers and scientists we have given humanity the means to do a tremendous amount of damage, and now we have a role to play in making sure it doesn't.
What are you going to do about it?
It is likely that not many people will ever see what I write here, and most likely far fewer of those people will read anything they didn't already know. However, it has occurred to me that if I can say just one sensible thing, and give just a couple of people a nudge in the right direction, that may have a trickle-down effect that makes something of a difference in the world, and that I have to try. Perhaps more importantly for me directly, it will help me structure my thoughts, give me a place to point to rather than explaining standpoints over and over again, and possibly even get me some useful feedback to improve my own understanding of the world I live in.
And so, I have decided that over the next couple of weeks I will write a couple of posts in which I outline my thoughts on a number of topics that seem to apply to the troubles of the world today. Feel free to disagree with me, but please do so respectfully, and if you want to have a debate, support your argument with (conventional) facts. On most of the topics I will be writing about I will not be an expert, sometimes I will presumably be wrong and say things which aren't true, although I will do my best to include reliable references whenever I can. I'm okay with all of that, because even in the cases where I am, somewhere I might spark a debate, create some more understanding and ever so slightly nudge someone towards my utopian fantasy of a world where we all live together in peace on a planet (or multiple planets) that we care for.
To those that have not given up on reading my blog: I will be giving a talk at FOSDEM 2014 this year about utilizing GPUs to accelerate 2D graphics. It will be at 16:30 in the Mozilla Developer room and it will be fairly technical! Those of you who have enjoyed my other posts on here might find this interesting as well.
My shortest post ever! Not even any pictures!
Moz2D Stand-Alone Build & Repository
In order to make it easier for ourselves, partners and potential third parties interested in using Moz2D to build and work with the Moz2D API, I now have a stand-alone repository that contains the basic Moz2D library code, the player and some testing applications. The exact status of this repository is still to be determined, and no automation is applied to it yet; however, we are making sure it keeps building and remains up to date! It can be found here. Some basic build instructions are available here; I will try to post more detailed build information, also for the different dependencies such as Cairo and Skia, somewhere in the coming weeks.
Direct2D 1.1 Integration
We've been working towards a Direct2D 1.1 backend in Moz2D. Direct2D 1.1 was included with Windows 8 and is provided as an update for Windows 7 users. It includes a wide range of API additions, several of which give us the ability to implement the Moz2D feature set with less and cleaner code. In addition, some of the API additions promise performance improvements. We have refrained from using some of the new APIs in the Direct2D 1.1 implementation so far, however, since at the moment they seem to increase complexity without offering the expected performance benefits. As we begin rolling out Direct2D 1.1 integration to actual Firefox users, I'll attempt to provide more information on what we discover along the way.
Moz2D Recording & Performance Analysis Improvements
Another thing that we've been working on has been increased recording abilities as well as an increased ability to get useful information out of these recordings. Although the priority for this has been relatively low, steady improvements are being made and it has proven to be a large asset in the creation of new Moz2D backends.
First of all, it's now possible to create a recording of a specific page with a single command when using Firefox Nightly. An example looks like this (assuming 'recording' is a valid, clean Firefox profile):
firefox -no-remote -P recording -recording file://C:\Users\Bas\Dev\mytest.html -recording-output mytest.aer
We're currently still working out some problems which will often cause recordings to be terminated prematurely and therefore come up empty, but this should be fairly easy to fix. Do note this needs to be done on a system where Moz2D content is enabled; for now, that is only Windows, although it should soon include OS X as well!
Once the recording is completed, several (experimental) possibilities open up. First of all, the Moz2D player has gained additional features; see a screenshot below for the updated look:
Note that it's now possible to replay the recording with different drawing backends (depending on your build). This makes it easy, when adding backends, to find flaws in new implementations by comparing drawing results with established backends. We've also added the possibility to analyze timings on a per-call basis to find performance bottlenecks for different backends. In addition, a separate application called 'recordbench' has been added, which is in a very early stage; it is essentially a very simple tool that reports timings for a recording across a range of backends. Through this application we should be able to do automated performance comparisons between different drawing backends.
Finally, we've added a micro-benchmark suite that allows us to make detailed performance comparisons on a per-backend basis. The intention of all this is to give us the ability to make well-informed decisions in the future as to which drawing backends to prefer on specific platforms/configurations.
This isn't necessarily all that we've been doing and I'll do my best to provide additional updates in the near future!
On to what this post is all about. A little over two months ago we had the Skia team visit us in the Toronto office. While there, they showed us some of the neat things they're doing with Skia, and one of them looked pretty cool: a tool that could take a recorded drawing stream and inspect it. It made me feel like a powerful debugging tool for drawing sequences could help us out as well. We already have some tools like this in the 3D drawing area (think of OpenGL APITrace or DirectX's PIXWin, soon to be the Visual Studio Graphics Debugger), however the calls seen there can be pretty hard to correlate with 2D drawing instructions due to batching and other mechanisms.
So this is what I started working on. Since Azure didn't have any recording functionality yet, I had to add that first; that's mostly done now and the work has landed in Nightly. As a result, Nightly can now produce recorded drawing streams. The way in which these streams are produced is still a little constrained; the following limitations apply:
These are issues we aim to fix in the future; some are easier to fix than others, so look for updates on this. For now, if you have a machine that's capable of Direct2D, you can add the 'gfx.2d.recording' pref and set it to true when you're curious about how we're drawing a certain site. You can then go to the site you want to know more about and close the browser. Your working directory will now contain the 'browserrecording.aer' file (rename/move this if you don't want it to be overwritten next run!).
As this file is stored in a binary format, by itself it isn't actually very useful to you! That's why I've also created a simple debugger. It's in the early stages of development: the UI is neither great nor very intuitive, and functionality is limited at this point, but it should prove sufficient for debugging certain types of graphics bugs. I built it on top of Qt, and the source code is available here. You can currently find a stand-alone build here, and you might also need the new VS2012 redistributables available here. To make it a little easier to explain, let's start with a screenshot of the main interface.
Now I'll explain what's currently available in the interface. This might be somewhat lengthy, so if you were just here for the cool pictures, feel free to browse on! It also requires some very basic Azure knowledge; I should follow up at some point with a blog post explaining the basic terminology.
There are currently three distinct areas in the player:
Drawing Events
The Drawing Events area on the left lists the drawing events present in the stream; clicking any drawing event there will replay drawing up to the corresponding point in the stream. Keep in mind that moving forward in time is faster than moving backward: when moving forward we only need to execute the 'new' drawing commands, whereas when moving backward we need to replay from the start (a drawing command can't really be 'removed'). This might be fixable in the future by storing 'keyframes' every so many drawing events.
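To illustrate why backward seeks are expensive and how keyframes would help, here's a toy sketch. This is not how the player is implemented; the "state" is just a list of applied events standing in for the real rendered surfaces, and the keyframe strategy is hypothetical:

```python
class Replayer:
    """Toy replayer over a list of drawing events, illustrating why seeking
    backward is costly and how periodic keyframes could mitigate it."""

    def __init__(self, events, keyframe_interval=None):
        self.events = events
        self.keyframe_interval = keyframe_interval
        self.keyframes = {0: []}  # event index -> snapshot of state at that index
        self.position = 0
        self.state = []

    def _apply(self, event):
        # Stand-in for actually executing a drawing command.
        self.state.append(event)

    def seek(self, target):
        if target >= self.position:
            # Forward: only the 'new' commands need to run.
            start = self.position
        else:
            # Backward: restart from the nearest stored keyframe (worst case, 0).
            start = max(k for k in self.keyframes if k <= target)
            self.state = list(self.keyframes[start])
        for i in range(start, target):
            self._apply(self.events[i])
            # Periodically snapshot state so later backward seeks are cheaper.
            if self.keyframe_interval and (i + 1) % self.keyframe_interval == 0:
                self.keyframes.setdefault(i + 1, list(self.state))
        self.position = target
        return self.state
```

With `keyframe_interval=None` every backward seek replays from event zero, which is exactly the cost the player pays today; with an interval set, a backward seek only replays from the nearest snapshot.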
The columns show the drawing event's 'ID' (its position in the sequence), the 'object' the drawing call applies to (in the case of object creation or destruction, the object that was created or destroyed; otherwise the DrawTarget the drawing call executed on), and the type of drawing event.
At the top there's a text input that lets you jump directly to a certain drawing event 'ID', back/forward buttons to jump through events you've recently visited, and a dropdown box that lets you filter on events coming from a specific object.
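The per-event structure described above (ID from stream position, object, event type) maps naturally onto a simple length-prefixed binary serialization. The following sketch is purely illustrative: the tags, field widths, and layout are invented here and are not the actual .aer format:

```python
import io
import struct

# Hypothetical event tags; the real Azure recording format is richer than this.
EVENT_CREATE_DRAWTARGET = 1
EVENT_FILL_RECT = 2
EVENT_DESTROY_OBJECT = 3

def write_event(stream, tag, object_id, payload=b""):
    """Append one tagged, length-prefixed event to a binary stream.
    Header: 1-byte tag, 4-byte object id, 2-byte payload length."""
    stream.write(struct.pack("<BIH", tag, object_id, len(payload)))
    stream.write(payload)

def read_events(stream):
    """Yield (tag, object_id, payload) tuples until the stream is exhausted.
    The event 'ID' is simply its position in this sequence."""
    while True:
        header = stream.read(7)
        if len(header) < 7:
            return
        tag, object_id, length = struct.unpack("<BIH", header)
        yield tag, object_id, stream.read(length)
```

Because events are read back in order, replaying to event N just means executing the first N events of the stream.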
Objects
This is the list of objects alive at the currently selected point in the drawing stream. Aside from being useful for debugging lifetime issues and showing some (currently very rudimentary) information about the objects listed, this area can be used for a number of other things. For example, you can double-click several types of objects to open them in the 'View' area, which I'll explain more about later. In addition, an object can be right-clicked to filter the Drawing Events list on that object.
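The model behind such an "objects alive" panel is straightforward: walk the creation and destruction events up to the current position. A minimal sketch, with event kinds invented for illustration:

```python
def objects_alive_at(events, position):
    """Return the set of object IDs that have been created but not yet
    destroyed after replaying the first `position` events.
    `events` is a list of (kind, object_id) pairs; the kind strings
    here are assumptions, not the real event types."""
    alive = set()
    for kind, object_id in events[:position]:
        if kind == "create":
            alive.add(object_id)
        elif kind == "destroy":
            alive.discard(object_id)
    return alive
```

This also shows why the panel is useful for lifetime bugs: an object that appears in the set long after its last use, or a drawing call referencing an ID not in the set, both stand out immediately.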
View
In this area relevant information about an object can be viewed. In the screenshot shown above we're showing the DrawTarget where the UI is being drawn at this point in time. We can do the obvious zooming in and out of DrawTargets and SourceSurfaces. In addition, for DrawTargets we display the currently set transform on the right, and there's a toggle button at the top that will visualize the currently set clip (it will illuminate it in bright red!). This area, just like the Objects area, is live: what is shown here updates as you navigate through the drawing events.
So that about covers the basics! Under 'Tools' there is a very experimental feature called 'redundancy analysis'; if you do decide to play with it, keep in mind it will corrupt the heap of the player :-)! Once I've worked on it a little more I will talk about it in a follow-up blog post. Again, this is all at a very early stage, and suggestions and contributions are very welcome!
But this is all Windows-only :(
That's true! The recording side of things, which is present on mozilla-central, is available everywhere if you force Azure content on. However, fonts currently do not work outside of Windows, and Azure content is not yet ready to be used for browsing on non-Windows platforms. The player itself is currently only available on Windows: although it has been developed on top of Qt and should work on other platforms, it uses the Azure stand-alone build, which currently only has a Visual Studio project file and only supports Direct2D.
I'm very interested in having the stand-alone library and test suite supported on other platforms, so if you want to help out in this area, please let me know! I'll be more than happy to provide more detailed instructions and help out.
That's all for now! I have a lot of other ideas to work out with the new recording system, I'll hopefully be able to cover more of those here in the near future.
Anyone who has ever worked in Mozilla code knows that, for various reasons, indentation depth and type are not very consistent across the tree. Some code uses a 4-space indent, other code a 2-space indent, and Cairo, for example, uses a 4-space indent with an 8-space tab width. Luckily, most of our files actually have a modeline describing the intended indentation settings for that file. However, Visual Studio has no built-in way of using this information (or at least, I haven't found one).
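For reference, parsing the `-*- ... -*-` emacs file-variables line that these modelines use is not much work. A minimal sketch (it handles only the single-line `var: value; var: value` form, which is what most Mozilla files use):

```python
import re

def parse_emacs_modeline(first_line):
    """Parse an emacs '-*- var: value; var: value -*-' modeline into a dict.
    Keys are lowercased; values are left as strings. Returns {} if the
    line contains no modeline."""
    match = re.search(r"-\*-(.*?)-\*-", first_line)
    if not match:
        return {}
    settings = {}
    for pair in match.group(1).split(";"):
        if ":" in pair:
            key, _, value = pair.partition(":")
            settings[key.strip().lower()] = value.strip()
    return settings
```

An editor integration would then map `indent-tabs-mode` to the tabs-vs-spaces setting, `tab-width` to the tab size, and `c-basic-offset` to the indent size, which is essentially what an add-in like the one described below has to do.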
After making another mistake in indentation that caused me significant pain by having to go back and delete spaces all over the place, I decided to write a little Add-in for Visual Studio that reads the emacs modeline and sets the editor settings accordingly. I added a simple installer so other people can hopefully use it to make their lives easier as well. Note that after installing, 'Tools->Add-in Manager' can easily be used to (temporarily) disable it, although you can of course also uninstall it. There are a couple of caveats I'd like to share:
Getting Visual Studio to output an installer the way I wanted (no elevation prompt, etc.) turned out to be pretty painful! So I'm sure the installer isn't very good, but it gets the job done.
You can find the installer here; hopefully it will help you as much as it helps me! If you have any suggestions, let me know.