A Visual Exploration of Two Museum Collections

As a successor to FW4/VIKUS-Viewer, this research project on interactive visualization provides access to a selection of objects from two collections of the Berlin State Museums (SMB/SPK), comparing fine art paintings with everyday artifacts.

Data science was done in Python notebooks on Google Colab, the visualization was prototyped on Observable and finally programmed in Svelte, D3 and PixiJS.
Published as open source on GitHub together with the working repo. I hope I will get around to writing a little documentation this time.

A UCLAB project with Mark-Jan Bludau, Viktoria Brüggemann & Marian Dörk.

Visuelle Erkundung der Aktivitäten der Finanzlobby (Visual Exploration of the Activities of the Finance Lobby)

Visualizations of the lobby report from Finanzwende. The data was gathered from 34 meetings of the Bundestag Finance Committee (2014–2020) as well as from 33 speaker drafts related to the financial market (2014–2020). The goal was to show which associations and companies submitted comments and statements.

Data visualizations created with D3 and Observable, delivered as an embed. In cooperation with Lorenz Matzat for Finanzwende.de.

Layerage

Layerage lets you create collages of r/Layer layers that are similar to each other. r/Layer was a community canvas for creative exploration, open to everyone to create and contribute, similar to r/place on Reddit.

I was so intrigued by the creativity and granularity of the layers that I had to experiment with them. Initially I built a tool to explore all those layers by zooming in and out, but then switched to working on this tool, which could be more accessible for everyone. More info.

Schaufenster

Based on my work on large image collections, I created an interactive presentation website and backend for my former university and its design department.

It is an endlessly scrollable and zoomable canvas which invites the viewer to stumble upon projects and portraits. Even though the website is generated dynamically, it can still be navigated through the browser history.
In cooperation with Boris Müller and Franziska Morlok.

 

VIKUS Viewer

VIKUS Viewer is a web-based visualization system that arranges thousands of cultural artifacts on a dynamic canvas and supports the exploration of thematic and temporal patterns of large collections, while providing rapid access to high-resolution imagery.

Cultural collections feature three fundamental facets that are essential to make sense of their contents: time, themes, and texture. The time an artifact was created provides essential context to make sense of the themes it depicts and its visual texture. The VIKUS Viewer offers an interactive environment to explore a wide range of cultural collections along these three aspects.

The VIKUS Viewer software is based on the code behind Past Visions, a collaborative effort by Katrin Glinka, Marian Dörk and me. The goal was to transform the functionality of the prototype into a tool which can be used on other collections.

This project was carried out in cooperation with the Urban Complexity Lab, the Prussian Palaces and Gardens Foundation (SPSG), the Research Center Sanssouci (RECS), and Forschungsverbund Marbach Weimar Wolfenbüttel (MWW).

Currently it is deployed on the following collections: FW4, Goethes Ausleihen, RECS Flugschriften, Sammlung Burggrafen, Van Gogh, Art Of The March

HARVEST Visualization

HARVEST by Julian Oliver was commissioned by the Konstmuseet i Skövde; the exhibition was designed and launched on 14 September 2017 and ran for two months in the museum.

The exhibition comprises a live feed directly from the miner, conveying data relevant to the mining process. I was in charge of the data visualization, which can be seen in the two projections in the exhibition.

cf. city flows

cf. city flows is a comparative visualization environment of urban bike mobility designed to help citizens casually analyze three bike-sharing systems in the context of a public exhibition space.

Three high-resolution screens show the space of flows of New York City, Berlin, and London through visualizing bike journeys. With our visualizations we want to understand the pulse of urban mobility, and create portraits of a city defined by its transient dynamics.

Together with Till Nagel & Marian Dörk, Client: Urban Complexity Lab, FH Potsdam

In a separate study I tried to push the boundaries of data visualization for urban mobility and designed a set of tools for space-time geography of flow data. Still in an early state, these experiments demonstrate a new approach to visualizing origin/destination movement by using edge bundling in 3D. Screenshots

Exhibition
Streams and Traces 5-11 Nov 2015, Berlin, Germany
IEEE VIS 2016 Arts Program 23-28 Oct 2016, Baltimore, USA

Paper
Till Nagel, Christopher Pietsch, and Marian Dörk. Staged Analysis: From Evocative to Comparative Visualizations of Urban Mobility. In Proceedings of the IEEE VIS Arts Program (VISAP)

Media
Bike share mapping creates beautiful portraits of London, NYC and Berlin The Guardian
Processing.org Exhibition
Bike-share trips become mesmerizing droplets of light in these animated maps VOX

Übermorgen

Expedition ÜberMorgen (Expedition BeyondTomorrow) is a platform centered on the ideals of constructive journalism and data literacy. The main question in focus is: “Which options does each individual have to support the sustainable development goals?”

We created an interactive data-driven globe which lets you compare your own behavior with global statistics. It dives deeply into the statistical basis of the sustainable development goals, so hard-to-grasp data can be easily interpreted.

The platform is highly customizable through a JSON data structure, which is used to publish one sustainable development goal every month. This project compiles both interactive features and reports, neatly arranged and meaningfully interconnected.
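To give an idea of what such a configuration could look like, here is a minimal, entirely hypothetical sketch; the actual JSON structure used by the platform is not published here, and all field names are assumptions.

```javascript
// Hypothetical sketch of a per-goal configuration entry (not the real schema):
// one such object could drive the globe, the statistics and the report modules.
const goal13 = {
  id: 13,                               // sustainable development goal number
  title: "Climate Action",
  published: "2018-06-01",              // one goal is published every month
  indicators: [
    {
      key: "co2_per_capita",
      unit: "t CO2 / person / year",
      source: "https://example.org/data/co2.csv", // placeholder URL
      compare: "user_input"             // lets readers compare their own behavior
    }
  ],
  modules: ["globe", "quiz", "report"]  // interactive features and reports
};
```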

For and with Lokaler (Lorenz Matzat) & Spiegel Online (Christina Elmer, Anna Behrend).

The project was funded by journalism grants and is published on Spiegel’s project page (English/German).


VIKUS: Past Visions

This project is part of the research project »Visualizing Cultural Collections« (Visualisierung kultureller Sammlungen), VIKUS for short, and investigates graphical user interfaces and the potential inherent in visual exploration of digitized cultural collections.

Frederick William IV of Prussia (1795–1861) left behind a collection of drawings. They bear witness to historical events such as wars and revolutions, literary influences, and personal obsessions with the devil. Numerous sheets reveal the planning eye of the King in the form of architectural visions and dreamy drafts. So far, 1,492 sheets of drawings penned by the King have been made fully accessible.

Bringing a ZUI to the browser, handling 1,500 images in high resolution, keeping a fluid interaction flow at 60 fps, staying consistent in look & feel and having a semantic timeline gave me quite a headache. Most of the interface elements, concepts & techniques you see are the result of countless iterations. That said, there are a lot more features which are not included yet.
In general, this project represents another experiment in pushing the perception of what the web, and information access in general, can look like. You can have a look at some screenshots to grasp a bit of the process and how this prototype took shape.

Together with and for Urban Complexity Lab (Marian Dörk & Katrin Glinka).
Dedicated to Jef Raskin – father of all Zooming User Interfaces

Paper: Katrin Glinka, Christopher Pietsch, Carsten Dilba, and Marian Dörk. Linking Structure, Texture and Context in a Visualization of Historical Drawings by Frederick William IV (1795-1861). International Journal for Digital Art History, No 2, 2016.

Deep Sweep Visualization

Visualization of the data, plotting GPS traces as well as sensor and RF logs of The Deep Sweep, a high-altitude signal research project by Julian Oliver, Bengt Sjölen and Danja Vasiliev.

Beyond Perception

Tap into the world of Brain Computer Interfaces

We reinvented the mind machine, „a device using pulsing rhythmic sound and flashing light to alter the frequency of the user’s brainwaves“, and combined it with an EEG in order to create a sensory feedback loop.

Beyond Perception is my bachelor thesis, which I did together with Luis Grass to conclude my studies of interface design at the University of Applied Sciences Potsdam.


DDB Visualisiert

The German Digital Library (Deutsche Digitale Bibliothek) provides access to a multitude of digitized artifacts of cultural heritage aggregated from many German cultural and scientific institutions.

To get a glimpse of what is hidden in this large cultural treasure, I created an overview visualization along time periods and cultural heritage sectors. Selecting a time span shows the most common keywords, places, persons and organizations for that period.

In contrast to existing search interfaces, which display a few objects in ordinary result lists, the visualization represents the distribution of many objects along time, place, topic, people, and other dimensions.
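As a rough illustration of the kind of aggregation behind such an overview (not the actual DDB code; field names are assumptions), the distribution per period and sector as well as the top keywords for a selected time span could be computed with d3-array:

```javascript
// Sketch with assumed fields { year, sector, keywords } per object.
import { rollups, descending } from "d3-array";

// Distribution of objects along 25-year periods and cultural heritage sectors.
function distribution(objects) {
  return rollups(
    objects,
    group => group.length,
    d => Math.floor(d.year / 25) * 25,  // time period
    d => d.sector                       // e.g. library, archive, museum
  );
}

// Most common keywords for a selected time span.
function topKeywords(objects, from, to, n = 10) {
  const keywords = objects
    .filter(d => d.year >= from && d.year <= to)
    .flatMap(d => d.keywords);
  return rollups(keywords, group => group.length, k => k)
    .sort((a, b) => descending(a[1], b[1]))
    .slice(0, n);
}
```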

Client: Urban Complexity Lab, Design Process: Screenshots, Scientific supervision: Prof. Dr. Marian Dörk, Consulting: Stephan Bartholmei, Fellow visualizations: Gabriel Credico (Networks), Christian Bernhardt (Keywords/Places)

NZZ Wandelhalle

For my most recent data visualization on lobbyism for the Swiss newspaper NZZ, I dug deep into the Data-Driven Documents library d3.js to create a playful & intuitive interface. The project took shape through many iterations and marks a creative learning process in data aggregation, handling and visualization. NZZ Article

Inspired by the genius Mike Bostock and his code snippets, I started the visualization in a radial manner, then redefined and reinterpreted the visual language. Experiments like the transformation into a different graph view showed me that there is still potential in this visualization. If you are interested in the visual process, you can browse through my Flickr stream.


Client: OpenDataCity, NZZ (Sylke Grunewald)

Meteorite media impact

This visualization explores the media impact of meteorites on the web versus their actual mass. We gathered data from Google, Bing, Twitter, Flickr and YouTube to determine the popularity of each meteorite and show its virtual depiction.

Collecting the data was the biggest task, mainly because we wanted to cover a wide spectrum of internet media. With various tools like CasperJS, Node.js and PhantomJS we crawled Topsy’s Otter API, the Twitter API, the Flickr API, the YouTube API, Google results and Bing results and fused everything in a MongoDB.
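As a rough sketch of the fusing step (not the original crawler; the API key handling, database and field names are assumptions), results from one source could be stored as source-tagged "mention" documents:

```javascript
// Sketch: fetch Flickr photos for a meteorite and fuse them into MongoDB
// as source-tagged mention documents (assumed database/collection names).
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");

async function storeFlickrMentions(meteorite) {
  const url =
    "https://api.flickr.com/services/rest/?method=flickr.photos.search" +
    `&api_key=${process.env.FLICKR_KEY}&text=${encodeURIComponent(meteorite)}` +
    "&format=json&nojsoncallback=1";
  const data = await (await fetch(url)).json();

  await client.connect();
  const mentions = client.db("meteorites").collection("mentions");
  await mentions.insertMany(
    data.photos.photo.map(p => ({
      meteorite,
      source: "flickr",
      photoId: p.id,
      title: p.title,
    }))
  );
  await client.close();
}

storeFlickrMentions("Chelyabinsk").catch(console.error);
```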

Selecting a meteorite shows you its detailed impact on the web with related tweets, Flickr photos or YouTube videos. More on visualizing.org and in the German documentation. Together with Luis Grass.


GLITXT

Glitxt was kickstarted during Art Day Hack 2013 at LEAP Berlin. Related to the theme „going dark“, we built a tool to encode messages into a picture in order to conceal the text, but also make it obvious to the human eye that the image is hiding something. Together with Paul Vollmer and Tim Pulver.

 


Liquidata

LiquiData is a tangible visualization to explore your personal movement profile and to share engaging places with others by adding photos and comments via your smartphone. It is a social system that allows people to discover new spots in unfamiliar surroundings, e.g. during a city visit. It offers you the possibility to better understand your movements through an urban environment and lets you compare your mental map with reality.

The prototype was developed for object-trackable tabletops like the Microsoft Surface. The programming environment Processing was used for the entire application. The aesthetics and behavior of the liquid were done with the libraries GLGraphics and toxiclibs. The multitouch gestures are based on TUIO. Map data is provided by OSM, tiles were generated by TileMill, and the interaction functionality comes from the Unfolding library.

The project was initiated in a course by Till Nagel at the University of Applied Sciences Potsdam back in 2011, in collaboration with Gunnar Friedrich, Pierre La Baume, David Ikuye and Luis Grass. Since then, my fellow student Gunnar and I have taken the idea to a running prototype which could transport the message we had intended. On the one hand there was the problem of performance, and on the other hand the trouble with the outdated hardware and software of the Surface. For instance, some GLSL shaders to generate a fluid-like surface ran excellently on my machine, but the ATI Radeon of the Surface had an unfixed driver issue. In the end we decided to generate a geo-location-driven particle engine wrapped by an isocontour, similar to the SearchGeometry library. Check out some parts of the process.


Logbook

As part of our class „Connect to Science“ at the University of Applied Sciences Potsdam, we developed a concept for a digital logbook that keeps track of everything that visitors find interesting while exploring the exhibition space.

In contrast to other solutions, it doesn’t try to overwhelm users with a catalogue of everything. Instead the app offers nothing but an empty logbook that can be filled with stamps. As we believe that revisiting the experience is as important as the actual hours on location, the logbook also helps to memorize the visit at home.

Check out the Logbook microsite. Together with Florian Schulz and Wenke Kramp.

We programmed a prototype app for the iPhone and designed custom physical stamps, which work like conductive markers. Our prototype setup consists of a wooden stamp that can be recognized by our web app running on a smartphone.

On the hardware side, we drilled a hole through the stamp, attached three soft conductive pads to the bottom and connected them to a copper wire that ends at the top of the stamp. When the stamp touches the screen of the iPhone, the app recognizes a pattern of touch points that relates to the specific stamp. The variations are endless, considering five touch points that can be placed at various distances from each other, thus forming unique patterns.
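A minimal sketch of how such a stamp could be identified in the browser, assuming the sorted pairwise distances between touch points serve as the signature; this is an illustration, not the original Logbook code, and the stamp names and distances are made up:

```javascript
// Hypothetical stamp signatures: sorted pairwise distances (px) between
// the three conductive pads of each stamp.
const STAMPS = [
  { id: "dna",  signature: [40, 40, 65] },
  { id: "atom", signature: [30, 55, 70] },
];

// Sorted pairwise distances of the current touch points.
function pairwiseDistances(points) {
  const d = [];
  for (let i = 0; i < points.length; i++) {
    for (let j = i + 1; j < points.length; j++) {
      d.push(Math.hypot(points[i].x - points[j].x, points[i].y - points[j].y));
    }
  }
  return d.sort((a, b) => a - b);
}

// Match the touch pattern against the known stamps within a tolerance.
function identifyStamp(touches, tolerance = 8) {
  const points = Array.from(touches, t => ({ x: t.clientX, y: t.clientY }));
  const dists = pairwiseDistances(points);
  return STAMPS.find(s =>
    s.signature.length === dists.length &&
    s.signature.every((d, i) => Math.abs(d - dists[i]) <= tolerance)
  ) || null;
}

document.addEventListener("touchstart", e => {
  const stamp = identifyStamp(e.touches);
  if (stamp) console.log("Stamped:", stamp.id);
});
```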


Interview by Jessica Patterson

The app and interaction design sprang from their research into Smart Guides that would enhance a visitor’s experience in a museum exhibition. They based the content and exploration on „the past Science Tunnel by the Max-Planck-Gesellschaft. This exhibition was on tour worldwide and presented current knowledge about the world today, both in fields of research and application.“

To create an application that was both informative and joyful to use, they pursued the concept for a digital logbook that can keep records of any items visitors find interesting while exploring the science exhibition. Before beginning the tour, users can check out a preconfigured phone or download the app on their own phone. When visitors explore the exhibit they are presented with stamps at various stations. By pressing the stamp on their phone, they record an image in their logbook and unlock information they can reference immediately and in the future.

And, although their concept app design is wrapped around a specific exhibition, „the interaction of using stamps to collect interesting findings is a general idea that would work in other contexts too.“

We thought the project and concept were an interesting take on integrating physical location and digital information. The team was gracious enough to answer some of our questions and give us some deeper insight into their process, design and the final UI experience.

How does the initial experience begin? Could users potentially download an app to participate, or would they have to use borrowed phones?

Users could either borrow a device (iPod / iPhone) or bring their own device to access the app. The big benefit of our concept is that it doesn’t require additional hardware on the device to check in with exhibits. Solutions with RFID are either bound to borrowed devices with RFID readers attached, or require Wi-Fi when RFID markers are attached to the device and the exhibit includes the reader. This makes it more expensive in terms of technology and other collateral. Although one could argue that QR codes would be even cheaper, they require more time and attention from the user to actually take a photo of the QR code. As we don’t really like this way of interaction, we never took them into consideration.

 

Could you explain how the stamps work technically? Will they physically reflect the digital stamp designs as rubberstamps would normally look?

Our prototype setup consists of a wooden stamp that can be recognized by our iPhone app. On the hardware side, we drilled a hole through the stamp, attached five soft conductive pads to the bottom and connected them to a wire that ends at the top of the stamp. When the stamp touches the screen of the iPhone, the app recognizes a pattern of touch points that relates to the specific stamp. As we can vary the distance as well as the number of touch points, the assortment of possible stamps is endless. For a final production, we imagine using 3D-printed rubber stamps that would have individual designs on the bottom with integrated conductive markers. The important part is to make sure that the hand is touching conductive material that connects to the markers.

Can you tell us a bit the Logbook team? What are you up to, and what is your background in design?

We are three interaction design students in our 4th year at the University of Applied Sciences in Potsdam, Germany. We are focusing on research and experimentation to find new ways of interaction beyond mouse and keyboard. As humans have never spent more time with computers and technology in their daily lives than they do today, we use design to solve problems and help make their lives more enjoyable. To do so, we use whatever we need to achieve our goals: graphic design, code and open source tools.

What sort of process did you use to collaborate on and develop the idea?

When our professor, Matthias Krohn, introduced us to the topic of Smart Guides, we started with research into existing solutions. Almost every big museum or exhibition offers apps for three reasons: content, navigation and participation. Most of them do a good job of providing additional media such as articles, images and videos and integrating maps of the exhibition space. But sometimes the amount of information is overwhelming and requires deep-level navigation.

In contrast, the Logbook doesn’t try to overwhelm users with a catalogue of everything inside the exhibit. Instead the app offers nothing but an empty logbook that can be filled with stamps—like a passport, keeping track of everywhere you have traveled. As we believe that revisiting an experience is as important as the actual hours on location, the logbook does just this—helping to memorize each moment so you can relive it on another day.

Moreover, nearly all of the existing apps replace former audio guides by copying the well-known procedure of entering 3- or 4-digit numbers to access the information. But at times we tend to get a little bit frustrated when we fail to enter our smartphone PIN correctly; we thought that there must be a better way to connect an app with an exhibit.

And as we all love to play with tangible interfaces, we thought that it would make a lot of sense to connect with a physical exhibit through a tangible interface. It was a naive idea when Wenke wanted to put objects on the iPhone to interact with. We all knew that this was possible on multitouch tabletops that work with cameras to detect fingers and fiducials (special patterns or markers that can be attached to objects to be detected by the tabletop). But how could this work on an iOS device that has a capacitive touchscreen? As we knew that iOS could recognize multiple fingers, we translated the concept of fiducials and came up with our own patterns that require multiple touch points at defined distances from each other. The limitation of not being able to recognize real objects led us to this workaround.

Since this project puts an emphasis on user interaction with physical objects to transmit information, is there any play here on nostalgia, novelty or some other desire to maintain humanity in an increasingly digital world?

Definitely. When we think of stamps, we think about the time when we were young. We could collect stamps, ink them and start to paint the world around us. And even as grownups, we are surrounded by stamps. We collect stamps at the airport when we travel to different countries, we receive stamps when we go to an event; and on the Way of St. James, people can prove their achievement when they showcase the stamps they collected on their way. Especially as everything becomes digital, we need to think more and more about how we can keep the connection to our physical surroundings. We think that using physical objects helps memorizing and learning, as it integrates feelings and emotions. Using pen, paper and, in our case, stamps, we can help to enhance the experience and anchor it in our minds.

Logbook has some great UI details, including the illustrated stamps. How did the team decide on the look and feel of the app? What became important to include or exclude?

The UI is related to the physical experience that we wanted to build. Therefore we integrated subtle textures to support the illusion of stamping onto paper and having snippets of paper in a logbook. Wenke designed the stamps and illustrated a storyboard that was an enormous help in presenting the concept to others. It was great to have someone with these skills on the team who could take us a step further. Usually we have experts in programming or information visualization and are able to show great screen designs. But this time it was also important to deliver something that was handcrafted and emotional.

On the information architecture side, considering the broad spectrum of users, from kids to seniors, we felt the need for a flat information architecture and came up with three views: Collection, Review and Map. We were inspired by the navigation of successful apps such as Path and reinterpreted it to fit our needs. Users can swipe left and right to switch between the different sections of the Logbook. To collect something new, you simply orient the device to landscape mode and stamp on the blank canvas. Your Logbook will then be packed with articles, photos and videos waiting to be reviewed instantly while you are on the go or relaxing comfortably at home.

 

Can you see the Logbook UX used for other applications?

We don’t think that it works for daily life apps, but everything that is related to a local context might make use of physical tokens.

What is your favorite aspect of the Logbook project?

The feeling of pressing the stamp on the display and seeing the representation of the stamp is incredible. We tested this with friends and colleagues, and more than once we have seen people pressing the stamp a dozen times onto the display just for fun. The soft bottom allows you to actually press the stamp down a little, which feels really good. People were surprised that it actually works, and it was a great experience for us to see people smiling and being happy while using an iPhone, which usually results in mean and emotionless faces.

Did you face any specific challenges in developing the idea? Would you foresee any challenges if you went to production?

As the physical prototype was only part of the whole project, we were not able to build a stamp with an actual image and markers at the bottom. Therefore we have a proof of concept that technically works, but it would need more product design to be used in a real-world context. We may need to find a material that is conductive and can be 3D-printed.

Where do you find inspiration for design projects in general?
Other university projects, our dreams and the web (hackaday.com, engadget.com, behance.com, theverge.com, creativeapplications.net, littlebigdetails.com).

Are there any specific designers / professionals you look to for guidance or inspiration?

No, as our field is so young, we try to invent our own route and not look too much at what has already been done. Of course, it is important to stay up to date and know what has been achieved and what new technology is available. But the most important lesson that school has taught us is that whatever idea we have, it is possible to make it happen. All the tools are out there, and if they are not, we can build our own.

Brain State Sharing

This project was developed in the course „Messing with our Minds“ and is an experimental approach to the topic of brain-computer interfaces and realtime neuroscience. At the beginning of the course we were confronted with the possibility of having realtime activity data of someone’s brain. The workflow was like this: EEG recording device -> MATLAB -> Processing. So what could we do with this?
We decided to investigate the topic of brain machines because it seemed to be just the perfect match. After some research our question was: Is it possible to transfer someone’s mental state through technology?

What came out of this was a working prototype with astonishing effects on the test subjects. The project was presented at the „Lange Nacht der Wissenschaft“ at Language of Emotions, FU Berlin, and at various festivals. More information on the project website. A project together with Luis Grass and Michael Härtel.


Border Bumping

Border Bumping uses people’s perception of geo-localization (based on cellphone data) as a means of distorting national borders. National borders are a construct, usually represented by a line on a map or demarcated by a sign, that signifies your movement from one place to another, more specifically from country to country or state to state. Most of the time you can’t see or feel that you are crossing a border; rather, you seamlessly move across space, as when traveling by train or plane. However, these transitions can become evident if you have a cellphone that notifies you that you are roaming or out of service.

Using these cellphone-based notifications, Julian Oliver created this project, Border Bumping, on which I proudly worked with Till Nagel on the visualization side. Featured on The Creators Project.

Till wrote a nice documentation of our design and programming progress which describes our process well.

It was a great experience to work with a highly motivated team, between beers, loads of coffee, Berlin hack style and night sessions.

To sum it up, here is a quick look into the technology:

  • TileMill to make a toner-like map style and deploy it to the caravan server
  • OpenStreetMap to get the country, street, city, … data
  • GeoJSON to hand over the interactive border data
  • Leaflet for a performant map with multitouch support and SVG rendering
  • D3 to animate the SVG generated by Leaflet and to read in the bumping data (see the sketch below)
  • jQuery/Zepto.js for the lightweight UI on the side
  • Paper to prototype ideas and talk about them
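
As a minimal sketch of how the Leaflet/GeoJSON/D3 combination fits together (assuming current library APIs, a page element with id "map" and a hypothetical borders.geojson file, not the original project code):

```javascript
// Sketch: Leaflet renders the border GeoJSON as SVG paths,
// and D3 selects those paths to animate them when a bump arrives.
const map = L.map("map").setView([52.52, 13.4], 5);

fetch("borders.geojson")                       // hypothetical border data
  .then(res => res.json())
  .then(geojson => {
    L.geoJSON(geojson, { style: { color: "#333", weight: 1 } }).addTo(map);

    // Pulse the border strokes, e.g. whenever new bumping data is read in.
    d3.selectAll("#map path")
      .transition().duration(800).style("stroke-width", 4)
      .transition().duration(800).style("stroke-width", 1);
  });
```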

By the way, the whole code is open source on GitHub.


Fillip

Fillip is an illuminated ball that was developed to stimulate people suffering from dementia.
It moves autonomously along irregular paths and thus provokes playful reactions. A camera system detects the players, the table edges and the current position of the ball. This makes it possible to remotely amplify a push of the ball, support its direction, steer it towards specific people in order to bring them into the game, keep it from falling off the table, or revive the ball when it has come to rest.
In this way, Fillip is able to positively stimulate attention and activity and to convey the joy of playing. It is meant to be used with little involvement of nursing staff.

Developed together with Jeremias Volker. More information at zeigma.com/fillip, Fillip at the Biennale, Fillip at aveneo – Nürnberger Messe.
