Wednesday, December 9, 2009

*rant* - Microsoft, why do you do this to me?

(Recent discussions of the Windows platform have led me to disclose my opinion of a particular infelicity of ASP.Net)

Background: a few years ago, my day job consisted primarily of web development using ASP.Net v.2 and C#. Overall I quite enjoyed the experience. C# is IMHO a pretty decent language, and .NET is alright too. Everything works perfectly smoothly, until you travel off the beaten path.

If Perl's maxim is "There's more than one way to do it," then ASP.Net's is "There's more than one way to do it, but the likelihood of any given way working properly is inversely proportional to the number of ways to do it."

I had an interface, let's call it IHasData. This interface provided a function to pretty-print information associated with different types of events for a web calendar. So far so good. I wanted to display several different events in a GridView. "Simple," I said to myself, "just put them in a generic collection and assign the data source." So I did that, with code pretty much like this (apologies if the syntax is off; I haven't done C# for years):

GridView gv = new GridView();
List<IHasData> events = new List<IHasData>();

// code to fill the events list omitted.

gv.DataSource = events;

and everything should work like a charm. Except, now my page sometimes threw an exception.

It was an odd exception too: it complained that some particular IHasData instance was missing a method that IHasData doesn't even expose, one that wasn't supposed to be in that class anyway. My code only called IHasData methods; the types ensured that nothing else could be accessed. So why was .NET trying to load something unrelated?

The actual problem was pretty clear. .Net was using reflection on the first element of the data source, and assuming that all remaining elements would have exactly the same type. .Net's reflection facilities are quite powerful, and once they're in play all members of a class are available, regardless of the type used to access it. Not necessarily a good idea for a generic list on interfaces. I have no idea why the GridView has this behavior, but it makes me sad. Particularly since it could have been done right and been that much more useful.

Ultimately this was only one of a host of problems that stemmed from trying to use NHibernate, persistent objects, and .Net built-in features like the ObjectDataSource. It would have been really slick if it worked as advertised, but there were some significant design flaws and the implementation was half-baked at best. I did eventually create a framework that filled most of the gaps, but I was making up for functionality the framework was supposed to provide and didn't. Not good.

Sunday, November 15, 2009

iteratee status

I haven't posted for a while, so I wanted to provide a small update. I've been working on the following updates:

1) Providing integrated pure and monadic iteratees. I've just implemented an iteratee class, but I don't yet know how performance compares to the original version. I'm also considering a GADT version, but I suspect it'll look ugly.

2) Exception and error handling. Haven't done much work, but I've been thinking about this and have some ideas to implement. Strings are not sufficient.

3) Reorganizing the modules. Most of this is done, although to some extent the final version will depend upon results of item 1.

4) Performance. I've been developing a benchmark suite utilizing Criterion. The ultimate goal is to use this suite to compare performance of multiple versions of the library.

5) Parsec integration. This resulted from a user request. It's useful enough that I'll probably integrate it into the main distribution.

Also, there's a trac instance available.

Wednesday, August 12, 2009

Restricted-element containers and maps

Given the perspective we have now, I think it's a little hard to imagine how ground-breaking the concept of type classes was when it was first introduced in Haskell. I think it says a lot that, after the Haskell Committee decided to include them, they made type classes an integral part of the language. Lest you think that's an exaggeration, can you imagine what Haskell would look like today without, e.g., Num, Functor, or Monad?

As tends to happen with any new idea, some of the areas that Haskell (and Haskellers!) still struggle with involve type classes. One area in particular is the combination of containers and type classes. A "Holy Grail" of sorts for Haskellers is the creation of a container type class that provides all the useful container functions, with every useful container type able to be an instance of it.

One package that provides this sort of functionality is John Goerzen's excellent ListLike package, http://hackage.haskell.org/package/ListLike.

ListLike provides (among many others) the functions "map" and "rigidMap". Why two? The difference is in the types:

> class ListLike full item | full -> item where
>   map :: ListLike full' item' => (item -> item') -> full -> full'
>   rigidMap :: (item -> item) -> full -> full

The standard map (and Functor's fmap) maps a container of any element type to a container of any other element type. rigidMap requires the input and output elements, and hence the containers, to have the same type. This is very useful for monomorphic containers. Given an instance for ByteString and Word8, the native 'map' can't implement ListLike's 'map': ByteString's map only transforms Word8s to Word8s, since Word8 is the only valid element type. ListLike's default 'map' therefore builds the new container by destructing and reconstructing the old one. This is highly inefficient compared to native maps, but it's the only way to write a function that satisfies the type signature. Even though "ListLike ByteString Word8" is the only instance that can exist, there's no way to prove that to the compiler. 'rigidMap', by restricting the types further, can use the native map, and is therefore much more efficient when it applies.
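To make the trade-off concrete, here's a self-contained sketch (the class and names below are invented for illustration and are much smaller than the real ListLike): the fully polymorphic map has no choice but to rebuild through a list, while rigidMap can delegate to the native map.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- A cut-down ListLike-style class, invented for illustration.
class MiniListLike full item | full -> item where
  toL      :: full -> [item]
  fromL    :: [item] -> full
  rigidMap :: (item -> item) -> full -> full
  rigidMap f = fromL . map f . toL   -- default: destruct and reconstruct

-- The polymorphic map can only rebuild via lists, losing native speed.
looseMapDefault :: (MiniListLike full item, MiniListLike full' item')
                => (item -> item') -> full -> full'
looseMapDefault f = fromL . map f . toL

instance MiniListLike [a] a where
  toL      = id
  fromL    = id
  rigidMap = map   -- native map, no rebuild

main :: IO ()
main = do
  print (rigidMap (+1) [1, 2, 3 :: Int])
  print (looseMapDefault show [1, 2, 3 :: Int] :: [String])
```

For lists the two happen to coincide, but for a packed type the rigid version avoids a round trip through [item] entirely.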

Unfortunately there is another case, one that I expect will become more common as certain high-performance libraries become more widespread. Many interesting containers are polymorphic but require class contexts on their elements. Examples are the UVector and StorableVector packages, which require elements to be instances of UA and Storable respectively.

Here's the problem. Using StorableVector as an example, these packages often provide the following map:

> map :: (Storable item, Storable item') => (item -> item') -> Vector item -> Vector item'

How can we use this in ListLike? It could certainly be used directly for 'rigidMap', but what about 'map'? Unfortunately that isn't possible, because of the class restriction on the output element type: that type variable isn't mentioned in the class definition (i.e. it doesn't appear in the instance head), and it's impossible to put a class context on a variable that isn't there. Falling back to the default 'map' implementation throws away any possible efficiency gains.

One solution is to include the output item type in the class definition, but that would require users to write an extra type variable every time the class context appears, which is a waste for something used by only one function.

I think the correct approach is to create a new type class, call it LooseMap, as follows:


> class (ListLike full item, ListLike full' item') => LooseMap full item full' item' where
>   looseMap :: (item -> item') -> full -> full'
>
> -- StorableVector instance
> instance (Storable el, Storable el') => LooseMap (Vector el) el (Vector el') el' where
>   looseMap = Data.StorableVector.map

Of course completely polymorphic containers (e.g. List) could easily be made instances of this class as well. I would think that "map" should be completely supplanted by "looseMap", although rigidMap would still be necessary for monomorphic types.
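As a sketch of the fully polymorphic case, here's a minimal LooseMap (with the ListLike superclass dropped so the example stands alone; this is illustrative, not the real class) and its List instance:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- LooseMap without the ListLike superclass constraint, just to show
-- the shape of a fully polymorphic instance.
class LooseMap full item full' item' where
  looseMap :: (item -> item') -> full -> full'

-- Lists place no constraint on their elements, so any pair of
-- element types is admissible.
instance LooseMap [a] a [b] b where
  looseMap = map

main :: IO ()
main = print (looseMap (fromIntegral :: Int -> Double) [1, 2, 3 :: Int] :: [Double])
```

Note that without functional dependencies the result type can't be inferred, so an annotation on the output container is needed at the use site.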

As a side note, there is another possible solution. I experimented with using type families to encode Functors (and thus a polymorphic map) as RFunctors, using the same approach as in the RMonad package. This worked fine for map, but I couldn't figure out how to write a fold that satisfied the typechecker. I think it's impossible, and would love to hear from anyone who knows how to make it work.

Sunday, April 19, 2009

SEAMUS recap

The 2009 SEAMUS Conference is over, and the finale featured the coolest thing ever. A video of an earlier concert is on Youtube already, of course (I didn't see the most recent concert posted yet).

http://www.youtube.com/watch?v=aOZEpP_zzaw&feature=related

Thursday, March 12, 2009

future directions for iteratee

I have recently completed two major projects that were taking up nearly all of my time. Now that they are done, I'd like to work on the next version of the iteratee package. Here are a few ideas that I hope to include soon.

1. Remove the explicit Maybes
This was suggested by Oleg, and it seems to be a good idea. Every iteratee that produces a value other than () uses Maybe to represent the possibility of failure (always an option when doing IO). By incorporating the Maybe into the type of IterateeG, these will no longer need to be explicit. This will also make IterateeGM an instance of MonadZero, to the extent that Maybe is an instance of that class.

Status: I've made a patch that does this, but it doesn't yet work properly with convStream. I haven't managed to track down the problem. Likely to be included, assuming I can solve this issue in a reasonable time frame.

2. Stream type class (and simplify StreamChunk)
I keep hoping for a ListLike type class to enter common use. Barring that, I have some ideas for breaking up the necessary functions of StreamChunk into separate type classes, such as the patch to use a monoid instance submitted by Bas van Dijk.

Status: Changes will be made, it is likely that StreamChunk will be broken into multiple smaller classes. Any addition of a Stream type class will wait until after point 4 is resolved.

3. More utility iteratees (foldl, filter, others?)
Status: Likely to be included. Changes to StreamChunk will make these easier to support.

4. Type-safe seeking
If iteratees are parameterized by Stream, the type of the stream should indicate if seeking is supported. I have an outline for how to implement this, but haven't done any work yet.

Status: Needs research, this will probably wait for the next major version bump.

5. Improved error handling
Bas van Dijk submitted a patch to change the type of a stream error from String to Error a. Others have suggested other possible changes as well.

Status: Needs more research, this will likely wait for the next major version bump.

enumerator/iteratees and output

I have recently received a few questions about writing output while using enumerator-based I/O. In some cases users have attempted to make enumerators (like enumFd) that will handle output as well, but have difficulty actually making it work.

I think these problems stem from an incorrect application of the enumerator model of I/O. When using enumerators, a file (or other data source) is a resource that can be enumerated over to process data, exactly as a list can be enumerated over in order to access the data contained in the list. Compare the following:

foldl f init xs

enumFd "SomeFile" ==<< stream2list

In the first function, 'xs' is the data to be processed, 'foldl' tells how to access individual items in the data collection, and 'f' and 'init' do the actual processing. In the second, "SomeFile" is the data, 'enumFd' tells how to access the data, and 'stream2list' does the processing. So how does writing fit in? The output file obviously isn't the data source, and it doesn't make sense to enumerate over your output file as there's no data there to process. So it must go within the Iteratee. It turns out that making an iteratee to write data is relatively simple:

> import Data.Iteratee
> import System.IO
> import Control.Monad
>
> writeOut :: FilePath -> IterateeGM [] Char IO ()
> writeOut file = do
>   h <- liftIO $ openFile file WriteMode
>   loop h
>   where
>     loop :: Handle -> IterateeGM [] Char IO ()
>     loop h = do
>       next <- Data.Iteratee.head
>       case next of
>         Just c  -> liftIO (hPutChar h c) >> loop h
>         Nothing -> liftIO $ hClose h


Add some error handling and you've got a writer. This version could be polymorphic over different StreamChunk instances by generalizing the type (FlexibleContexts may be required as well). Other stream-specific versions could be written that would take advantage of the specific StreamChunk instance (e.g. using Data.ByteString.hPut instead of hPutChar).

I hope this will serve as a very basic introduction to output when using enumerators. In addition to a generic writer like this, it may frequently be beneficial to define special-purpose writers. In a future post I will show a writer that seeks within the output file using a threaded State monad.

Tuesday, February 3, 2009

Build a better WAVE reader, pt 2

In the last post, I looked at the beginnings of using an Iteratee to create a WAVE file reader. I achieved a runtime of 25 seconds, compared to 1.9 seconds for my benchmark code.

Oleg's Iteratee code uses a Stream type which internally represents data as Chunk [a]. Haskell lists have many great properties, but they are not particularly efficient for this type of numerical processing. There are many better options, including Arrays, UVectors, and StorableVectors.

Unfortunately, every option I'd want to use introduces a type-class restriction on the element type of the array. This isn't a problem for sound files in particular, but it does greatly limit the utility of the Iteratees in general. One of the Iteratee examples shows how Iteratees can be layered for text processing, using IterateeGM String m a iteratees layered on IterateeGM Char m a iteratees to provide operations on words in a text stream. That's no longer possible, because String is not an instance of the type classes required by these more compact data structures. In the end, though, the performance benefits should be worth it.

I chose StorableVector to replace the basic list, because it seemed handy while I was experimenting. StorableVector turns out to have another good property which is key to achieving the goal of performance close to C.

Adapting the Iteratees to use StorableVector is straightforward. Unfortunately when we've finished, performance is actually worse! In fact, it's as bad as the original RBIO-based version. It turns out that one key function is the culprit: conv_stream.

conv_stream converts between two streams. It is capable of converting multiple elements from one stream to one (or no) elements of the other, using a user-supplied conversion function. The wave file reader uses conv_stream to convert from a stream of Word8's to a stream of Doubles so the user can operate on Doubles, with the library handling all conversions transparently. The original type of conv_stream is:
conv_stream :: Monad m =>
    IterateeGM el m (Maybe [el']) -> EnumeratorN el el' m a

Unfortunately, this means our data needs to be unpacked from the StorableVector Word8 into separate Word8's, those Word8's assembled into a Just [Double], and finally packed into a StorableVector Double for the next Iteratee. This is very inefficient, but easily fixed. Just change the type of conv_stream to

conv_stream :: (Storable el, Storable el', Monad m) =>
    IterateeGM el m (Maybe (Vec.Vector el')) -> EnumeratorN el el' m a

and we're done. In fact, the implementation doesn't change at all from the original, just the type. Now no packing or unpacking need take place.

With the new conv_stream, it's necessary to change the conversion function: the IterateeGM which converts Word8's into Doubles. I've actually split this code into two parts: a function (called unroller, because the original list-based version was a manually-unrolled endian_read) that converts Word8s into Word16s or Word32s as necessary, and a normalization function that normalizes the result and converts from Words to Doubles. The normalization function is fine as is; only the unroller function needs to change. Ideally it would operate on StorableVectors directly, and fortunately that's possible. All that's necessary is to get a ForeignPtr Word8 from a StorableVector and cast it to ForeignPtr Wordn. A somewhat hackish 16-bit converter can be implemented as follows:
import qualified Foreign.Ptr as FP
import qualified Foreign.ForeignPtr as FFP
import qualified Data.StorableVector as Vec
import qualified Data.StorableVector.Base as VB
import System.IO.Unsafe (unsafePerformIO)
import Data.Word (Word8, Word16)

unroll_16 :: (Monad m) => IterateeGM Word8 m (Maybe (Vec.Vector Word16))
unroll_16 = liftI $ IE_cont step
  where
    step (Chunk vec)
      | Vec.null vec                = unroll_16
      | Vec.length vec == 1         = liftI $ IE_cont $ step' vec
      | Vec.length vec `rem` 2 == 0 = liftI $ IE_done (convert_vec vec) (Chunk Vec.empty)
      | otherwise =
          let (h, t) = Vec.splitAt (Vec.length vec - 1) vec
          in liftI $ IE_done (convert_vec h) (Chunk t)
    step stream = liftI $ IE_done Nothing stream
    step' i (Chunk vec)
      | Vec.null vec                = liftI $ IE_cont $ step' i
      | Vec.length vec `rem` 2 == 1 =
          let vec' = Vec.append i vec
          in liftI $ IE_done (convert_vec vec') (Chunk Vec.empty)
      | otherwise =
          let (h, t) = Vec.splitAt (Vec.length vec - 1) vec
          in liftI $ IE_done (convert_vec $ Vec.append i h) (Chunk t)
    step' _i stream = liftI $ IE_done Nothing stream
    convert_vec vec =
      let (fp, off, len) = VB.toForeignPtr vec
          f   = FP.plusPtr (FFP.unsafeForeignPtrToPtr fp) off
          fp' = FFP.castForeignPtr (unsafePerformIO (FFP.newForeignPtr_ f))
                  :: FFP.ForeignPtr Word16
      in Just $ VB.fromForeignPtr fp' (len `div` 2)

and that's it! The big hack is that this only works on little-endian platforms. To be correct, the bytes would need to be swapped on big-endian systems. Implementing a suitable swap function is currently an exercise for the reader. Note that unsafeForeignPtrToPtr should be safe here, as there is no finalizer associated with the ForeignPtr, and in any case the memory pointed to won't be GC'd until after the Iteratee is complete.
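For what it's worth, the 16-bit half of that exercise might look like the following (swap16 is a hypothetical helper, not part of the code above; on a big-endian host it would be mapped over the converted vector before normalization):

```haskell
import Data.Bits (shiftL, shiftR, (.|.))
import Data.Word (Word16)

-- Swap the two bytes of a Word16. The left shift wraps within the
-- 16-bit word, moving the low byte into the high position.
swap16 :: Word16 -> Word16
swap16 w = (w `shiftL` 8) .|. (w `shiftR` 8)

main :: IO ()
main = print (swap16 0x1234 == 0x3412)
```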

At this point, the data should be handled entirely as a StorableVector, with no conversions to or from any intermediate lists at any point. Here are results from running the test:
$ time ./test_iter_sb
Just (AudioFormat {numberOfChannels = 2, sampleRate = 44100, bitDepth = 16})
Just 0.977568895535142

real 0m1.661s
user 0m1.335s
sys 0m0.140s
And the proof is in the pudding. This is the fastest Haskell implementation yet, even faster than hsndfile in a lazy stream.

A bit of work remains to be done. It should be cleaned up, with the hack fixed properly. Also I've currently only implemented unroll_16. For 24-bit audio, I expect the creation of a Word24 type and Storable instance will be necessary in order to employ the same technique. However, I think I've proven that a pure-Haskell, high performance audio I/O library is possible.
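As a rough sketch of what such a Word24 might look like (entirely hypothetical; the byte layout here assumes little-endian order, matching the hack above):

```haskell
import Data.Bits (shiftL, shiftR, (.|.))
import Data.Word (Word8, Word32)
import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Ptr (Ptr, castPtr)
import Foreign.Storable (Storable (..))

-- A 24-bit word carried in the low bits of a Word32.
newtype Word24 = Word24 Word32 deriving (Eq, Show)

instance Storable Word24 where
  sizeOf    _ = 3
  alignment _ = 1
  peek p = do
    let bp = castPtr p :: Ptr Word8
    b0 <- peekElemOff bp 0
    b1 <- peekElemOff bp 1
    b2 <- peekElemOff bp 2
    return . Word24 $  fromIntegral b0
                   .|. (fromIntegral b1 `shiftL` 8)
                   .|. (fromIntegral b2 `shiftL` 16)
  poke p (Word24 w) = do
    let bp = castPtr p :: Ptr Word8
    pokeElemOff bp 0 (fromIntegral w)
    pokeElemOff bp 1 (fromIntegral (w `shiftR` 8))
    pokeElemOff bp 2 (fromIntegral (w `shiftR` 16))

-- Round-trip a sample through raw memory.
main :: IO ()
main = allocaBytes 3 $ \p -> do
  poke (castPtr p) (Word24 0x123456)
  Word24 w <- peek (castPtr p)
  print (w == 0x123456)
```

The odd sizeOf of 3 is what lets a Vector Word24 pack samples with no padding, which is exactly what the 24-bit unroller would need.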

Monday, February 2, 2009

Build a better WAVE reader, pt 1

I have recently been working on creating an optimal audio I/O library in Haskell. By optimal, I mean the library should do all of the following:
  1. Have a clean, functional, usable interface. Operations should be composable, and the user should not have to deal with recursion, folds or the like. Procedural-style interfaces (anything exposing filehandle-type operations to the user) are out too.
  2. Be space efficient. Some Haskell implementations suffer from space leaks, or can have space leaks if the user isn't careful with data.
  3. Have performance comparable to hsndfile, a Haskell binding to libsndfile.
  4. Be implemented purely in Haskell. This makes it interesting.
The standard for audio I/O performance in Haskell has been hsndfile, as was discussed on haskell-cafe not long ago. Testing shows that using hsndfile to read data in a lazy stream fashion (similar to lazy ByteStrings) is about 10 times faster than the next-fastest implementation. Not good for Haskell's claim of having speed comparable to C! Still, I'm convinced it's possible to do better.

My test case consists of finding the peak normalized amplitude of a 16-bit stereo WAVE file. The audio duration is about 6 minutes, with a total file size of about 66 MB. This task involves reading and processing the entire file. The processing is quite minimal, so the speed of an implementation should depend primarily on the efficiency of reading and normalizing audio data.

Using hsndfile in a lazy stream does provide an interface which is familiar to many Haskell users, but I'm not particularly fond of it. If the user is not careful, the entire file can be retained in memory after processing. For a large audio file, this leads to an unacceptable situation. It should be impossible for data to be retained without explicit action from the user. Also, dependencies on foreign libraries can be difficult for some (i.e. Windows) users to resolve.

So having decided to create a native Haskell implementation, the first place to start is with the semantics and interface. I recently read Oleg K.'s presentations on Iteratee processing in Haskell, and I'm convinced this is the best model for functional I/O. It just feels right, what else can I say? Read the paper, see the code.

A preliminary version of Iteratee-based processing was benchmarked as "Enumerator" in the Haskell-art discussion. It was roughly comparable with other Haskell solutions, but has some problems. Notably it doesn't support seeking within the file, which would be nice. After I developed that, Oleg released a new version of Iteratee which does support seek operations via the RBIO monad. So I dropped Enumerator in favor of the new Iteratee + RBIO.

Oleg helpfully provides a TIFF reader with his Iteratee code. I based my wave reader on that, using the same technique of storing Enumerators for each wave sub-chunk in an IntMap. Since they aren't tagged, a list probably would have served just as well, but since most wave files only have 2 or 3 chunks anyway it doesn't seem like it matters much. Changes here aren't going to have a large performance impact.

The first version to use Iteratee + RBIO executes in about 45 seconds. Ouch.

RBIO uses IORef's to communicate seek requests and answers between Iteratees and Enumerators. Unfortunately IORefs are slow, so I'll start by getting rid of them. The data type
data IterateeG el m a = IE_done a (StreamG el)
                      | IE_cont (StreamG el -> IterateeGM el m a)

can be changed to

data IterateeG el m a = IE_done a (StreamG el)
                      | IE_cont (StreamG el -> IterateeGM el m a)
                      | IE_seek FileOffset (StreamG el -> IterateeGM el m a)

This also necessitates adding one more case to (>>=) on Iteratees:
iter_bind m f = m >>== docase
  where
    docase (IE_done a (Chunk [])) = f a
    docase (IE_done a stream)     = f a >>== (\r -> case r of
        IE_done x _ -> liftI $ IE_done x stream
        IE_cont k   -> k stream
        iter        -> liftI iter)
    docase (IE_cont k)     = liftI $ IE_cont ((>>= f) . k)
    docase (IE_seek off k) = liftI $ IE_seek off ((>>= f) . k)

After these changes, the provided Iteratees can be adapted to the new structure relatively easily. The enumerator that reads from a file, enum_fd_random, must also be updated to understand seek requests. At this point we can remove RBIO entirely, and execution time drops to about 33 seconds. Adding a few carefully-chosen INLINE and SPECIALIZE pragmas gets it down to 25 seconds. Better, but still very far from the goal of being comparable with C.
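To illustrate how the new constructor gets used, here's a toy version of the types with a primitive that issues a seek request (the real IterateeGM is a newtype over m (IterateeG el m a); this sketch flattens it to a type synonym to stay self-contained):

```haskell
-- Toy versions of the post's types, for illustration only.
type FileOffset = Integer

data StreamG el = EOF | Chunk [el]

type IterateeGM el m a = m (IterateeG el m a)

data IterateeG el m a
  = IE_done a (StreamG el)
  | IE_cont (StreamG el -> IterateeGM el m a)
  | IE_seek FileOffset (StreamG el -> IterateeGM el m a)

-- An iteratee that asks the enumerator to reposition the stream,
-- then resumes with whatever chunk arrives at the new offset.
seekTo :: Monad m => FileOffset -> IterateeG el m ()
seekTo off = IE_seek off (\s -> return (IE_done () s))

main :: IO ()
main = case seekTo 100 :: IterateeG Char IO () of
  IE_seek off _ -> print (off == 100)
  _             -> print False
```

The enumerator pattern-matches on IE_seek, performs the lseek, and feeds the continuation the next chunk, which is exactly the role RBIO's IORefs used to play.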

The next step is to change the Iteratee's internal representation of data.

Introduction

Hi, and welcome. I've decided to start this blog as a way to share some of my findings with various communities I'm involved with. I will try to post when I have something important/interesting to say, so I may not update all that frequently.

My main interests are music and programming. Specifically, modern art music, electroacoustic music, and programming in order to serve some musical purpose. For the moment, my language of choice is Haskell, because it's far more interesting than other languages I know. If Perl is for people who want to Get Things Done, then Haskell is for people who want to Do Things Right, Eventually, After You Figure Out Exactly What That Means. Which suits me just fine.

So again, welcome and thanks for taking a look.