Beware of FirstOrDefault() when using sqlite-net

Overall, using sqlite under Windows 8 has been a neat experience. Stuff seems to work, which is nice.

There are a couple of things that were a surprise to me though – the last of them being that FirstOrDefault() does not behave as it should (or at least as I would have expected it to) in sqlite-net.

Background

If you use LINQ for accessing .NET collections, by and large, calling FirstOrDefault() (or First()) with a lambda expression is a very convenient way of finding elements inside them.

The magic though happens when you make a call to FirstOrDefault(lambda) on a LINQ source that implements IQueryable – these guys have a chance of further optimizing your call.

For example, if you have a List<Person> and you want to find the first person whose Id is equal to 10, you can run the following: list.FirstOrDefault(x => x.Id == 10). When used on an in-memory collection, this is just like doing a foreach and looking for the relevant item through the entire collection. There’s no magic happening here other than the brevity of the syntax.
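On an in-memory list, that call is essentially shorthand for a loop like this (a sketch, assuming a simple Person class with an Id property):

// Roughly what list.FirstOrDefault(x => x.Id == 10) does on a plain List<Person>.
Person match = null;
foreach (var person in list)
{
    if (person.Id == 10)
    {
        match = person;
        break;
    }
}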

However, if you use a LINQ to SQL connection to a SQL Server database and make the same call off the People table, for example (connection.People.FirstOrDefault(x => x.Id == 10)), some magic does happen. The library will translate your lambda (and the call itself) into a SQL statement that will execute against the engine. That means that if the People table is indexed by Id, the call will return the desired results very quickly – without your code actually running over all the items. Similarly, when using Where(), the database engine is the entity that actually returns the relevant results – not your code iterating over the entire table.

The surprise

Sqlite-net is a set of wrappers that lets you use sqlite through LINQ and LINQ-like mechanisms. The implementation is far from complete and is missing a lot of bits (joins are not supported through LINQ for example).

However, the thing that got me was the fact that FirstOrDefault() behaves very differently from Where().

With Where(), sqlite-net will behave as expected – crafting a WHERE statement behind the scenes (though sadly you don’t get access to it) and executing it against the sqlite engine. The same does not happen with FirstOrDefault() though – instead of creating a SQL statement, the library realizes the entire table (bringing it in its entirety into memory) and runs the lambda on each element.

For small tables, this is really not a big deal – sqlite is fairly fast. However, for big tables, this can be pretty bad. In my tests there’s a 100x cost of using the “wrong” way:

18ms: table.Where(x => x.Id == 30000).FirstOrDefault()

2200ms: table.FirstOrDefault(x => x.Id == 30000);

Conclusion

Be careful when using 3rd party libraries that are not fully developed (duh!). And specifically, in sqlite-net, do not call FirstOrDefault() to get at a specific record in a table. Use Where() instead.
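For reference, here is a minimal sketch of the recommended pattern (the Person class, database file name and Id are made up; Table<T>(), Where() and FirstOrDefault() are sqlite-net's):

// Fast: the WHERE clause is executed inside the sqlite engine.
var connection = new SQLiteConnection("people.db");
var person = connection.Table<Person>()
                       .Where(x => x.Id == 30000)
                       .FirstOrDefault();

// Slow: realizes the entire table in memory and filters it in .NET.
// var person = connection.Table<Person>().FirstOrDefault(x => x.Id == 30000);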


Memory leak in BitmapDecoder.GetPropertiesAsync and how to deal with it

Accessing photo metadata with the Windows Runtime (WinRT) is a fairly simple matter. StorageFile contains functions for accessing various types of properties – photo-related ones included.

Some properties, however, need to be accessed in a different manner – specifically, the XMP metadata defined by Adobe cannot be accessed directly from StorageFile, but rather through the BitmapDecoder class. This metadata area is important because it’s what companies such as Google (via Picasa) and Microsoft use to embed rich metadata inside photos. Using it, once you figure out the syntax, is actually fairly easy. You instantiate a BitmapDecoder with the stream of a file and call GetPropertiesAsync() on its BitmapProperties with the appropriate XML path (similar to XPath, but not exactly… :))

This is what the call looks like, for example, to get the collection of Microsoft specific facial recognition rectangles (as put there by tools like Live Photo Gallery) inside a photo:

var decoder = await BitmapDecoder.CreateAsync(stream);
var props = await decoder.BitmapProperties.GetPropertiesAsync(new string[]
{
@"/xmp/RegionInfo/Regions"
});

Once you get the regions, you can use the returned value to iterate over the contained XML elements and extract the information you want from them.

That’s all fine and good.. There is one catch though..

The memory leak

When you use this mechanism, you will find that you are leaking a little less than 20k per BitmapDecoder. That means that if you expect your user to only open a handful of photos and do stuff with them, you can safely ignore the rest of this post. At worst, your app will leak a few hundred Ks and next time it restarts all will be fine and dandy..

However, if you want to scan an arbitrary collection of photos, you will quickly run out of memory – scanning 2,000 photos will leak about 150–170 MB of memory.

The workaround

Working around the issue requires you to do two things. One obvious, the other less so..

First the obvious: you need to access direct primitive values – doing that seems to remove some of the leak. This is done in a fashion very similar to the previous calls. The only catch is that you cannot really interrogate the XML for its structure (since you cannot get it) and so you need to make some assumptions and detect completion in other fashions. For example, to get the person name and face rectangle of the first person from the Microsoft XMP extensions, you need to make the following call:

var decoder = await BitmapDecoder.CreateAsync(stream);
var strings = new string[]
{
@"/xmp/RegionInfo/Regions/{ulong=0}/PersonDisplayName",
@"/xmp/RegionInfo/Regions/{ulong=0}/Rectangle"
};
var props = await decoder.BitmapProperties.GetPropertiesAsync(strings);
BitmapTypedValue val;
props.TryGetValue(strings[0], out val);
props.TryGetValue(strings[1], out val);

In this fashion, you will get the two primitive values (both strings) into the BitmapTypedValue and you can use them to determine what the name of the first person is and where they are located on the photo.

Now, to iterate over all the people in the photo, you will need to have a running counter starting at 0 and make these calls, each time incrementing the counter by one. When the call to props.TryGetValue() returns false, you will know that there are no more faces to be had and you can stop iterating.
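Here is a hedged sketch of that loop (it follows the approach just described; if your decoder throws for a missing index instead of simply omitting the key, catch that and break out of the loop the same way):

var decoder = await BitmapDecoder.CreateAsync(stream);
for (ulong i = 0; ; i++)
{
    var keys = new string[]
    {
        string.Format(@"/xmp/RegionInfo/Regions/{{ulong={0}}}/PersonDisplayName", i),
        string.Format(@"/xmp/RegionInfo/Regions/{{ulong={0}}}/Rectangle", i)
    };
    var props = await decoder.BitmapProperties.GetPropertiesAsync(keys);

    BitmapTypedValue name, rect;
    if (!props.TryGetValue(keys[0], out name) || !props.TryGetValue(keys[1], out rect))
    {
        break; // No more faces to be had.
    }

    // name.Value and rect.Value hold the person's name and face rectangle strings.
    Debug.WriteLine("Face {0}: {1} at {2}", i, name.Value, rect.Value);
}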

This, however, does not solve the leak. It makes it less pronounced – at about 4k per call. Much better, but still not great if you want to scan 50,000 photos.

To solve the leak completely (or at least, to such a degree that scanning 50,000 photos will have no discernible effect), you need to do something weird. You need to wrap the stream returned to you from StorageFile with a stream of your own making. The stream you create simply needs to forward the calls to the original stream. This is what the wrapper looks like:

class RandomAccessStreamWrapper : IDisposable, IRandomAccessStream
{
private IRandomAccessStream m_stream;
public RandomAccessStreamWrapper(IRandomAccessStream stream)
{
m_stream = stream;
}

public void Dispose()
{
if (m_stream != null)
{
m_stream.Dispose();
m_stream = null;
}
}

public bool CanRead
{
get { return m_stream.CanRead; }
}

public bool CanWrite
{
get { return m_stream.CanWrite; }
}

public IRandomAccessStream CloneStream()
{
return new RandomAccessStreamWrapper(m_stream.CloneStream());
}

public IInputStream GetInputStreamAt(ulong position)
{
return m_stream.GetInputStreamAt(position);
}

public IOutputStream GetOutputStreamAt(ulong position)
{
return m_stream.GetOutputStreamAt(position);
}

public ulong Position
{
get { return m_stream.Position; }
}

public void Seek(ulong position)
{
m_stream.Seek(position);
}

public ulong Size
{
get
{
return m_stream.Size;
}
set
{
m_stream.Size = value;
}
}

public Windows.Foundation.IAsyncOperationWithProgress<IBuffer, uint> ReadAsync(IBuffer buffer, uint count, InputStreamOptions options)
{
return m_stream.ReadAsync(buffer, count, options);
}

public Windows.Foundation.IAsyncOperation<bool> FlushAsync()
{
return m_stream.FlushAsync();
}

public Windows.Foundation.IAsyncOperationWithProgress<uint, uint> WriteAsync(IBuffer buffer)
{
// Forward to the wrapped stream (calling WriteAsync(buffer) on ourselves would recurse forever).
return m_stream.WriteAsync(buffer);
}
}

As you can see – it has no logic of its own other than the construct/dispose – all the actual interface calls simply call into the m_stream field.

Once you pass this stream into BitmapDecoder.CreateAsync() you will see the leaks as good as go away.
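For completeness, this is roughly how the wrapper slots in (a sketch; file here is the StorageFile for the photo):

using (var wrapped = new RandomAccessStreamWrapper(await file.OpenAsync(FileAccessMode.Read)))
{
    var decoder = await BitmapDecoder.CreateAsync(wrapped);
    // ... the same GetPropertiesAsync calls as before ...
}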

Enjoy.


Five concepts that completely changed how I write code

(Note that the dates in this post are approximate and are when I discovered said concepts – they may have been around for a while by then, but it may have taken me time to learn about them)

Every developer has these “a-ha” moments where he or she learns something new. Something that completely changes how they perceive code, architecture or design. I am not talking about the “important” bits of realizing that it’s more important for your code to be readable than for it to be “witty”. I am also not talking about the moment you realize how critical it is for your code to be testable or scalable or maintainable.  Each developer has these.

My path was heavily influenced by Microsoft technologies and it shows in my five. I am sure other people have their own five that are influenced by other paths.

I am talking about those moments in time when you learn of a new language, a new architectural element or a new language feature that completely changes how you think about the pure code you write or about how you design it. Often these things go hand in hand with the other important things in coding (maintainability, testability and all those other really important but sometimes less than exciting things). These are those “a-ha” moments where you think to yourself “this changes everything!!!” and you are actually excited about it. Can’t-sleep-because-I-am-thinking-about it excited. So excited that you go back and change existing code you have already written – even though deep deep inside you know that’s probably not the best use of your time. Even though you know you are going to be struggling to get the code back to where it was before you decided to futz with it. But you just cannot stand to see your precious code standing there wearing last year’s fashion. Crying. In the rain. While its mascara runs in black rivulets down its white pasty cheeks.  I may have pushed the metaphor a little. My point stands though.

I am talking about GOSUB.

1985 – The year of the GOSUB

I got my first computer when I was about 7 and started learning BASIC. It was the most fun I’d ever had in my short short life. I had some great booklets that I went through one by one. They taught me a lot – from how basic code flow works (in BASIC :)) to how to POKE around and change the character set to do your bidding. When I was around 10 however, I stumbled onto GOSUB.  GOSUB is GOTO’s smarter, more civil relative. In BASIC, you would GOSUB to a line (GOSUB 1000), moving the executing line (just like GOTO), but when calling RETURN, the executing line would return to right after the line that called GOSUB.

It changed everything for me.

I could suddenly reuse code. My mind was blown. I could literally write pieces of code that would do something and MOVE CONTROL BACK TO THE CALLER.

It was amazing. I started using variables to parameterize what these calls did. I used GOSUB all over the place. I had little pieces of paper that would keep track of what lines did what. This is when I actually started using REM (comments) quite a bit so that I could remember what variables each GOSUB was using.

1991 – The year of Objects

In ’91 (I was about 16) I already had a PC (I believe it was a 386, but I am not 100% positive) and I was knee deep in Pascal. Being able to write code that would compile into an EXE w/o having to use an interpreter was great. I had these massive Borland Turbo Pascal books my mom somehow found and brought to the house. These books explained this concept of Object Oriented programming which I was having a hard time grokking. It seemed very similar to stuff I was already doing. And I just didn’t get what the point was. Everything it was showing me I could already do with Pascal RECORDS. Big deal.

Until one day I was lying in bed, again reading the book, and it “clicked”. It made everything SO MUCH EASIER. Stuff I’d been struggling to get working just happened magically. I no longer had to maintain tables of function and procedure pointers with that awful Pascal syntax.

It changed everything for me.

I used objects for EVERYTHING. Whether it was warranted or not. Operator overloading? Don’t mind if I do! I calmed down over the years… But man oh man. My code never again looked the same.

Later on, when I moved to C++ and learned how C++ did OO, I was again blown away, but not like that day when I was in bed and had that “click”.

1996 – The year of COM

By 1996 I was in the army, writing code like a grown up. Creating software people were actually using. It was exhilarating. I had been using OLE and COM for a year or two by then, but more out of necessity than anything else. However, that’s around when VB4 came along and allowed you to create ActiveX’s (or OCXs back then). And suddenly…. Everything changed again. I could write pieces of reusable code and fairly easily utilize them across language boundaries.

Now.. I know what you are thinking… DLLs..

Not the same thing.. While DLLs had certainly been usable for just this for a long while by then, they always felt kludgy. There were no clear rules on how to use exported functions. There was no concept of objects. There was no “discovery”. COM changed all that. It allowed you to define a set of interfaces and let languages as low-level as C and as high-level as Access or VB5 use them. Without having to figure out if a string was passed one way or another.

It changed everything for me.

I started designing interfaces to components – even for things I was sure would never have to be used across technology boundaries. Because they might. One day. And while I was a little sad that COM was not as rich as C++ objects, I understood the trade-offs. It was great.

2002 – The year of .NET

By 2002, Java had already been out for 7 years or so. It had been heavily used for at least 4-5 years when I finally joined the managed languages bandwagon. Very late to that particular party.

When I first encountered Java back in 1999 or so, I think I “got it”. I got how it can be transformative and I also understood that it would change a lot in the industry. However, because of the particular work I was doing, it was never “good enough” at that stage to use. The UI it produced was terrible and I ended up spending more time on Borland Builder and then MFC/ATL than on Java. And so I came in pretty late to the managed scene.

When I started using .NET 1.0 for a server project (and then very quickly to 1.1), I again was blown away.

It changed everything for me.

I was able to write code so much faster. I was spending more time on stuff I wanted to do and less time on stuff I had to do. It was exhilarating. I always thought of myself as a pretty decent C++ developer and knew I could get X amount of work done fairly quickly. However, with C# and .NET (the combination of a managed language and a very rich set of well defined libraries), I could get the same X amount of work done twice, three times and in some cases 10x faster than I ever could with C++.

I still used C++ regularly, but not nearly as often. I would use it only when I absolutely had to – either due to performance (which was rarely the case) or because of some tech that was not really accessible in managed.

2012 – The year of Async

In 2012, with .NET 4.5, Microsoft introduced new asynchronous capabilities into some of the .NET languages. It took some existing concepts and added language support (what people like to call syntactic sugar) to make something… Wonderful…

It changed everything for me.

Once I started using these new capabilities, my code transformed completely. This manifested in two ways.

First, code that’s traditionally considered asynchronous (IO such as network or file management) as well as just plain old parallelization techniques became incredibly easy to write, understand and debug. Non-trivial IO code shrank by a factor of 10. And it was much cleaner and easier to recompose or reuse.

Second, things that I didn’t originally think of as asynchronous (even though they totally were) took on another form. Things like animation and event sequences were suddenly cast in a completely different light and were utilizable in more ways.

I made the same mistake I did with OO and with COM and to a lesser extent with GOSUB – I overused it initially. But only because it was so much fun.

Easy but magical

In thinking about these five transformations in how I coded, it dawned on me that with one exception, there’s a very clear link between all of them.

They are trivial. But they are revolutionary (at least for me).

Given a day or a week, I could whip up a simple implementation of GOSUB, of OO, of COM or of Async. When you take the concept and hold it very close to your eyes, it is incredibly clear what’s going on. You can understand all the moving parts. It is when you hold it at arm’s length that it makes everything magical. Managed code is the odd man out here and is far from being truly trivial to implement.

The next five

In thinking about what the “next five” would be to make this list of mine, I am trying to think about what I spend most of my time on today when doing actual coding. While I have been doing multithreaded development for a very long while now (15 years maybe), I am far from producing bug-free code quickly. I still find myself placing locks incorrectly, or completely forgetting to protect elements, as well as sometimes getting the flow wrong. I believe that something like a more restrictive threading model (which does not suffer from perf loss) would probably be one of the next a-ha moments for me. Similarly, I still struggle with correctly writing error handling code. I always miss an exception, or misinterpret it.. It can get quite annoying. So maybe something in that area.

However, off the top of my head, I can’t think of many things today that I find to be so cumbersome that I would need them replaced. But we’ll see. Maybe in 20 years or so I will post “Ten concepts that completely changed how I write code.” :)

What are your five?

I would love to hear what your five (or four or six) transformative “a-ha” moments were.


Honorable mentions

I have been thinking on whether or not I should include Lambdas and Linq in here as another “a-ha” moment. I opted not to, simply because I feel that, while I rely on them heavily in day to day coding and would feel the pain had they not been around, they have not really changed how I write code. They mostly take code that I would have already written one way or another and make it very compact and somewhat more readable. They don’t really change flow or design.


How to use CreateStreamedFileAsync

StorageFile.CreateStreamedFileAsync is a pretty neat mechanism that allows you to take a stream and use it like an IStorageFile. This is useful in a number of scenarios – especially ones where you are communicating with the OS or other apps (such as when using the share charm) and the “currency” used is StorageFile objects.

One thing you could do is simply write your streams into temporary files and hand those temporary files over when needed. But there’s a bunch of unnecessary IO happening in that case – you write to disk and another app reads from disk where all you really want to do is straight up pass a stream.

Enter CreateStreamedFileAsync

I needed to use this method for an app, but something felt fishy about it.. Here’s the signature of the method:

public static IAsyncOperation<StorageFile> CreateStreamedFileAsync(
string displayNameWithExtension,
StreamedFileDataRequestedHandler dataRequested,
IRandomAccessStreamReference thumbnail
)

And the signature for the StreamedFileDataRequestedHandler callback:

public delegate void StreamedFileDataRequestedHandler(
StreamedFileDataRequest stream
)

The idea is simple – you call the CreateStreamedFileAsync() method and it returns a StorageFile object which is not fully realized yet.

When the caller tries to access the StorageFile, it will activate the callback, which receives a StreamedFileDataRequest instance that is used to provide the content of the file.

But here lies the rub… There’s no obvious “completion” semantics for the call. The class passed in has no “complete” method like other mechanisms in WinRT seem to have. When called, how will the OS know that the file completed streaming?

I searched the web, StackOverflow, MS forums and the like. While there are a few people who inquire about it, I have not been able to find any clear answer. And MSDN does not have any examples on how to use this.

It took me longer than I’d care to admit, but it finally dawned on me that there are indeed completion semantics here – the stream can be closed (or disposed) and that can be used as the explicit completion signal. A few minutes of work and what’d’ya know? That’s how you are supposed to use it!

The code

Here’s what the code looks like. First, the code that creates and uses the StorageFile instance (note that I am using the most simplistic example I could think of – an example that doesn’t really need to use this mechanism but requires very little code):

private async void Button_Click(object sender, RoutedEventArgs e)
{
var file = await StorageFile.CreateStreamedFileAsync("file.txt", FileGrabber, null);
FileSavePicker picker = new FileSavePicker();
picker.DefaultFileExtension = ".txt";
picker.FileTypeChoices.Add("Text", new string[] { ".txt" });
var target = await picker.PickSaveFileAsync();
await file.CopyAndReplaceAsync(target);
}

The first call we make will create the streamed file, passing in the FileGrabber callback (shown below). We then use a picker to prompt the user for a file to save and we copy our streamed file into the picked file.

Here’s the interesting bit:

private async void FileGrabber(StreamedFileDataRequest request)
{
try
{
using (var stream = request.AsStreamForWrite())
using (var streamWriter = new StreamWriter(stream))
{
for (int i = 0; i < 5000; i++)
{
await streamWriter.WriteLineAsync("This should be the contents of the file.");
}
}
request.Dispose();
}
catch (Exception ex)
{
request.FailAndClose(StreamedFileFailureMode.Incomplete);
}
}

The call takes a StreamedFileDataRequest which is an IOutputStream derivative (among other things).

What we do then is as follows:

  1. We transform the stream into a .NET stream (by calling .AsStreamForWrite()).
  2. We create a writer on top of it.
  3. We then write into the stream we got from the OS the content we want saved.
  4. Then – when we are done, we Dispose() of the request, which signals to the OS that we are done (this bit is important since, as you can see, we are awaiting WriteLineAsync() – if there were no completion semantics, the OS would think we were done as soon as that first await was hit).
  5. On failure, we make sure to call FailAndClose().
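And since the share charm was part of the motivation, here is a hedged sketch of handing such a streamed file to a DataRequested handler (the handler wiring and the title are illustrative; FileGrabber is the same callback as above):

private async void OnDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
{
    var deferral = args.Request.GetDeferral();

    // Create the streamed file and hand it to the share target as a storage item.
    var file = await StorageFile.CreateStreamedFileAsync("file.txt", FileGrabber, null);
    args.Request.Data.Properties.Title = "Streamed file";
    args.Request.Data.SetStorageItems(new[] { file });

    deferral.Complete();
}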

That’s it pretty much! Hopefully you find this and it saves you a bit of time.


Easy resource acquisition/disposal with yield returns

I love C#’s using keyword for quick resource acquisition/disposal. Together with foreach, it’s one of the first syntactical sugar pieces baked into C# and it’s a boon for keeping code tight, readable and maintainable.

On top of the classic usages (such as handling file streams where it’s paramount for the stream to be closed when it’s no longer needed), a second usage emerged early on (and in fact has been used for decades in similar fashions in other languages) and that’s simply defining the scope of operation pairs. A good example is transactions. In many libraries, you can use special classes whose sole purpose is to start a transaction when they are constructed and close it when the instance is disposed.

How it works today

Another good example is logging of timing related information. One can wrap a piece of code in a using statement that will log the amount of time the code inside the scope ran for. For this, we’ll need to have a class that implements IDisposable and that calculates the time it took between the time it got constructed and the time it got disposed. A class like that may look a little like this:

private class TimeItObject : IDisposable
{
Stopwatch m_sw;
string m_name;
public TimeItObject(string name)
{
m_sw = Stopwatch.StartNew();
m_name = name;
}

public void Dispose()
{
Debug.WriteLine("Timing for {0} took {1}ms", m_name, m_sw.ElapsedMilliseconds);
m_sw.Stop();
}
}

To use it, the developer writes the following code:
using (new TimeItObject("My scope"))
{
// Do some stuff here.
}
This is all great, but the class itself is fairly verbose. One needs to declare members, initialize them, and in some cases copy constructor arguments into them.

yield return to the rescue. Again.

I simply love yield return. I think it’s one of the most versatile keywords in C# – especially because collections are such a pervasive feature in the language.

yield return has been used as a poor man’s replacement for today’s async functionality, and it can be used as a state machine. It’s incredibly powerful.

In this case, we’ll create a helper class called DisposableEnumerable. This will take an IEnumerable interface and turn it into something that can be used inside using() with minimal effort.

Here’s the class:

public class DisposableEnumerable<T> : IDisposable
{
IEnumerable<T> m_enumerable;
IEnumerator<T> m_enumerator;
T m_result;

public DisposableEnumerable(IEnumerable<T> enumerable)
{
m_enumerable = enumerable;
m_enumerator = enumerable.GetEnumerator();
m_enumerator.MoveNext();
// Capture the yielded value so the Result property actually returns it.
m_result = m_enumerator.Current;
}

public T Result
{
get { return m_result; }
}

public void Dispose()
{
// Run the part of the iterator after the yield return, then release the iterator.
m_enumerator.MoveNext();
m_enumerator.Dispose();
}
}

The class is fairly straightforward. It takes an IEnumerable<> as an argument and immediately makes sure to move to the first item (the call to MoveNext). When disposed, the class again calls MoveNext().
 
On top of that, we’ll introduce an extension method that constructs one of these guys from an IEnumerable:
public static class Extensions
{
public static DisposableEnumerable<T> AsDisposable<T>(this IEnumerable<T> self)
{
return new DisposableEnumerable<T>(self);
}
}

 

So how do you use it?

Using this mechanism is fairly simple. Instead of writing the class as we did above, we simply write a yield return function like this:
public static IEnumerable<object> TimeIt(string name)
{
var sw = Stopwatch.StartNew();
yield return null; // Pivot point
Debug.WriteLine("Timing for {0} took {1}ms", name, sw.ElapsedMilliseconds);
}

The code is divided into two parts – the before yield return portion, which will run as part of the constructor, and the after yield return portion, which will run as part of Dispose(). The fun bit here is that we don’t need to define members, remember passed parameters or anything – the C# compiler will do all that work for us.
 
And this is how the function is used (very similar to the original code – instead of “new”, we call AsDisposable()).
using (TimeIt("My scope").AsDisposable())
{
// Do something
}

When the using code is generated, a call to the constructor will be made, essentially making the TimeIt method execute until the first yield return. When the using block exits, it will call Dispose() on the class, which in turn will call MoveNext(), which will run the bit that’s after the yield return.
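The same trick works for any simple acquire/restore pair. As another illustration (a sketch; WithConsoleColor is a made-up helper, not part of any library), here's a function that temporarily switches the console's foreground color:

public static IEnumerable<object> WithConsoleColor(ConsoleColor color)
{
    var previous = Console.ForegroundColor;
    Console.ForegroundColor = color;    // Runs when the DisposableEnumerable is constructed.
    yield return null;                  // Pivot point.
    Console.ForegroundColor = previous; // Runs when the using block disposes.
}

// Usage:
// using (WithConsoleColor(ConsoleColor.Yellow).AsDisposable())
// {
//     Console.WriteLine("This line is yellow.");
// }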

Diminishing returns

The more complicated your resource acquisition helper, the less reason you have to use this method, since the amount of code (boilerplate or otherwise) will converge and your yield return function will grow more complex. This mechanism, or trick, is especially useful for cases where you have a very simple resource acquisition/disposal pair to implement.

 
 

How to await a CancellationToken

Cancellation tokens are a way for asynchronous operations to allow the caller to cancel them. The mechanism is pretty damn robust – the same token can be propagated along multiple nested operations such that a cancellation of the top-level object will cancel whichever operation happens to be running right now.

Sometimes, when implementing your own Task-returning method, you may find yourself wanting to simply wait for a cancellation token to “pop”. One example is when, as part of your asynchronous operation, you are waiting for user input, and one of the options is to cancel the operation.

Surprisingly (for me), a cancellation token does not seem to have an awaitable entry point. A small extension function though and you can easily do it. The bulk of the code (the AsyncManualResetEvent) is actually not part of this blog, but exists in the very excellent blog by Stephen Toub.

Once you have access to AsyncManualResetEvent, simply add this extension method to your project and you are good to go:

public static Task AsAwaitable(this CancellationToken token)
{
var ev = new AsyncManualResetEvent();
token.Register(() => ev.Set());
return ev.WaitAsync();
}
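If you would rather not take a dependency on AsyncManualResetEvent, an equivalent sketch using TaskCompletionSource works just as well:

public static Task AsAwaitable(this CancellationToken token)
{
    var tcs = new TaskCompletionSource<bool>();
    token.Register(() => tcs.TrySetResult(true));
    return tcs.Task;
}

// Typical usage - race an operation against the token "popping":
// await Task.WhenAny(operationTask, token.AsAwaitable());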


Awaitable Managed Animation classes for WinRT (Part 4)

So far in this post series, I showed how to use code and the new async patterns of WinRT to simplify the animation of elements. As an example, I used opacity, showing how you can code your animation dynamically.

In this post, I’ll show how to move elements about, use translations on them etc. The download link at the bottom of the post contains the sample app and the library used to do the animations.

Let’s make it pop!

First off, let’s change how the squares we had in the previous post appear. Instead of fading them in slowly, let’s make them pop.

That’s usually achieved in XAML by either using projection and making the objects “far” down on the page (projecting the distance on the Z axis) or by simply scaling the element using a ScaleTransform or a CompositeTransform. Since WinRT has a bunch of bugs with the projection transformation, we’ll use the scale one.

The code will be very similar to the staggered fade-in we did in the previous blog post. It’s located in the PopPage page. First off, we set up the UI elements much like we did in the previous post. But instead of setting the opacity to 0 to start off, we’ll set the scale to 0. Note that EnsureRenderTransform() is simply an extension method that makes sure the relevant transform class is set on the element.

var transform = control.EnsureRenderTransform<CompositeTransform>();
transform.ScaleY = transform.ScaleX = 0;
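If you're curious what such an extension method might look like, here's a hedged sketch (the library ships its own implementation; this one simply creates and assigns the transform if it isn't there yet):

public static T EnsureRenderTransform<T>(this UIElement element) where T : Transform, new()
{
    var transform = element.RenderTransform as T;
    if (transform == null)
    {
        transform = new T();
        element.RenderTransform = transform;
    }

    return transform;
}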

 

Now, for our staggered animation of scaling the elements:

private async Task PopAnimationAsync()
{
TimeSpan duration = TimeSpan.FromSeconds(1);
List<Task> tasks = new List<Task>();
foreach (var item in canvas.Children.OfType<FrameworkElement>())
{
tasks.Add(new ScaleTransformManagedAnimation(1, duration).Activate(item));
await Task.Delay(300);
}

await Task.WhenAll(tasks.ToArray());
}

So far, this code is very similar to the opacity animation code, but it uses the ScaleTransformManagedAnimation class to do the animation – we scale the element to 1 to see the object at its full size.

Only problem is… This doesn’t really POP! It just grows and gets to the size… Mundane. Boring. Vanilla. Pedestrian.

What we need is an easing function! Most managed animation classes can take an easing function (or two) as arguments. So let’s do that (showing only the changed line – the one that activates the animation):

tasks.Add(new ScaleTransformManagedAnimation(1, duration, ManagedAnimation.Elastic1Out).Activate(item));

Pop!! The animation has this satisfying quality to it now where the elements “pop” into existence one after another. You can play with the timing and easing function to get to exactly what you want.

As you can see, we simply added another argument to the constructor – ManagedAnimation.Elastic1Out. This makes the Storyboard use the elastic function to calculate the animation. You can pass in any easing function you want – the ManagedAnimation class comes with some predefined ones that seem to be pleasing for certain cases:

public static readonly EasingFunctionBase DefaultOut = new QuadraticEase() { EasingMode = EasingMode.EaseOut };
public static readonly EasingFunctionBase DefaultIn = new QuadraticEase() { EasingMode = EasingMode.EaseIn };
public static readonly EasingFunctionBase DefaultInOut = new QuadraticEase() { EasingMode = EasingMode.EaseInOut };
public static readonly EasingFunctionBase Elastic1Out = new ElasticEase() { EasingMode = EasingMode.EaseOut, Oscillations = 1, Springiness = 3 };
public static readonly EasingFunctionBase Elastic1In = new ElasticEase() { EasingMode = EasingMode.EaseIn, Oscillations = 1, Springiness = 3 };
public static readonly EasingFunctionBase Elastic3Out = new ElasticEase() { EasingMode = EasingMode.EaseOut, Oscillations = 3, Springiness = 3 };
public static readonly EasingFunctionBase Elastic3In = new ElasticEase() { EasingMode = EasingMode.EaseIn, Oscillations = 3, Springiness = 3 };
public static readonly EasingFunctionBase Bounce1Out = new BounceEase() { EasingMode = EasingMode.EaseOut, Bounces = 1, Bounciness = 3 };
public static readonly EasingFunctionBase Bounce1In = new BounceEase() { EasingMode = EasingMode.EaseIn, Bounces = 1, Bounciness = 3 };

 

I want to move it move it!

Elements don’t need to stay in place. You can move elements around your application by using animations. For example, let’s say that instead of our POP! animation, we want the elements to “rise” up one after another from the bottom of the screen and “settle”. Nothing easier! We will use the CanvasMoveManagedAnimation class to do that!

private async Task RiseToTheOccasionAsync(double top)
{
foreach (var item in canvas.Children.OfType<FrameworkElement>())
{
Canvas.SetTop(item, canvas.ActualHeight);
}

TimeSpan duration = TimeSpan.FromSeconds(1);
List<Task> tasks = new List<Task>();
foreach (var item in canvas.Children.OfType<FrameworkElement>())
{
tasks.Add(new CanvasMoveManagedAnimation(Canvas.GetLeft(item), top, duration, ManagedAnimation.Elastic1Out, ManagedAnimation.Elastic1Out).Activate(item));
await Task.Delay(300);
}

await Task.WhenAll(tasks.ToArray());
}

As you can see, the gist of it is very similar to our previous examples. However, we are using the CanvasMoveManagedAnimation class instead and are telling the element where to go.

Note that if, during that animation, we were to tell the element to go somewhere else by using the CanvasMoveManagedAnimation class, it would stop where it was at the time of the call, and animate to the new location. As a developer, you need not worry about what it’s doing now in most cases.

Why did you use Canvas.SetTop/Left and not Translation?

This is a matter of preference – since I placed the elements on a canvas, it’s very easy to use absolute positioning to play with them. I could just as easily have used translations.

One interesting bit about using Canvas positioning to animate your elements around (where applicable of course) is that you can then animate a translation on top of it to make a combined movement.

Other animations supported

Here are the interesting animations present in the library:

  • TranslateTransformManagedAnimation: Does translation animation (as just discussed).
  • SizeManagedAnimation: Animates the Width and Height of an element.
  • ProjectionPropertyManagedAnimation: Animates the various projection properties.
  • FillColorManagedAnimation: Animates the Color property of the Brush applied to a shape.
  • CompositeTransformManagedAnimation: Animates any of the properties of the CompositeTransform on an element.
The fun part is that adding your own managed animation classes is very easy. Each one is a couple of lines of code – mostly boiler plate for storing constructor arguments and creating a storyboard.

The library also contains a couple of other classes for more advanced animations. Specifically, the CanvasCircleMoveManagedAnimation, CanvasSpiralMoveManagedAnimation and CircleGenerator can be used to generate complex movement animations on a canvas (they are needed mostly because WinRT does not support Point path animation).

What’s next?

I may do a post in the near future about the more advanced classes. I may also do one on more advanced interruptible interactions that make use of these classes.

Download

You can download the full source code here.
