Milania's Blog
https://www.milania.de/
My little place on the web...

Showcase: The correlation coefficient subject to noise
https://www.milania.de/showcase/The_correlation_coefficient_subject_to_noise
Thu, 19 Apr 2018 00:00:00 +0200, Jan Sellner

The correlation coefficient is an important metric to measure the linear dependency between two variables \(X\) and \(Y\). It is defined as

Blog: CSS lightbox without JavaScript realized with a hidden input element
https://www.milania.de/blog/CSS_lightbox_without_JavaScript_realized_with_a_hidden_input_element
Thu, 25 Jan 2018 00:00:00 +0100, Jan Sellner

If you place images into a layout with a fixed width (like this webpage), you may encounter the problem that some images are too large to display, so they are only shown at a lower resolution. If we still want the user to be able to view the image in its full glory, we need an additional means of interaction. One option is to provide a link to the full-size image, but then the user has to leave the current page, which breaks the attentional flow. A lightbox is a nice way to overcome this issue: it allows viewing images at a higher resolution without leaving the current site. The image is shown enlarged on the same page while the rest of the site recedes into the background (but stays visible), as seen in the following example.
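The summary does not include the markup; a hedged sketch of the technique the title describes could look as follows (file names and the `lightbox-state` id are made up here, and the state is stored in a hidden checkbox):

```html
<!-- Hidden checkbox that stores the open/closed state of the lightbox -->
<input type="checkbox" id="lightbox-state" hidden>

<!-- The inline (downscaled) image acts as the label that opens the lightbox -->
<label for="lightbox-state"><img src="photo-small.jpg" alt="Photo"></label>

<!-- Full-size view; clicking anywhere on the overlay closes it again -->
<label for="lightbox-state" class="lightbox"><img src="photo-full.jpg" alt="Photo"></label>

<style>
  .lightbox {
    display: none;              /* hidden until the checkbox is checked */
    position: fixed;
    inset: 0;
    background: rgba(0, 0, 0, 0.8);
    justify-content: center;
    align-items: center;
  }
  #lightbox-state:checked ~ .lightbox {
    display: flex;              /* show the overlay on top of the page */
  }
</style>
```

The general sibling combinator `~` requires the overlay to be a sibling of the checkbox, which is why the hidden input sits at the same nesting level.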

Showcase: Nearest neighbour density estimation
https://www.milania.de/showcase/Nearest_neighbour_density_estimation
Tue, 05 Dec 2017 00:00:00 +0100, Jan Sellner

Density estimation based on the nearest neighbours is another technique to estimate an unknown PDF from observed data, yielding an estimate \(\hat{p}(x)\). It implements, in a sense, the opposite idea of the Parzen window estimator, where we place a kernel at each data point with a certain side length \(h\) which determines the kernel's local influence. Using a large \(h\) results in wide kernels which collect more points along the way. In nearest neighbour density estimation, we approach the problem from a different perspective: instead of fixing the side length \(h\) and collecting a varying number \(k\) of neighbours for each kernel, we fix \(k\) and adjust the influence area accordingly.
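As a minimal sketch of the idea (my own illustration, not code from the article): in one dimension the estimate at \(x\) is \(\hat{p}(x) = k / (n \cdot V)\), where \(V = 2 r_k(x)\) and \(r_k(x)\) is the distance to the \(k\)-th nearest neighbour:

```python
def knn_density(x, data, k):
    """1-D k-nearest-neighbour density estimate: p(x) ~ k / (n * V),
    where V = 2 * r_k is the interval length reaching the k-th neighbour."""
    dists = sorted(abs(x - xi) for xi in data)
    r_k = dists[k - 1]  # distance to the k-th nearest neighbour
    return k / (len(data) * 2 * r_k)

# 100 points spread evenly over [0, 1]: the true density is 1 on that interval
data = [i / 99 for i in range(100)]
print(knn_density(0.5, data, k=5))  # close to 1
```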

Blog: Introduction to kernel density estimation (Parzen window method)
https://www.milania.de/blog/Introduction_to_kernel_density_estimation_%28Parzen_window_method%29
Sun, 12 Nov 2017 00:00:00 +0100, Jan Sellner

In probability theory, it is common to work with certain distributions which describe a stochastic process and reveal information about how the process may behave. In practice, however, it is not unusual to deal only with data directly, without any knowledge of a formal distribution. Even though we don't know the distribution, it is still valid to assume that the data arise from a hidden one: we acknowledge that there is a distribution which produced our data, but we don't know which one it is, and there is probably no way to find out for sure. There are, however, techniques which can estimate a distribution based on the observed data. One is known as kernel density estimation (also known as Parzen window density estimation or the Parzen-Rosenblatt window method). This article is dedicated to this technique and tries to convey the basics needed to understand it.
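A minimal sketch of the estimator with a Gaussian kernel (my own illustration, with bandwidth \(h\)): \(\hat{p}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)\) with \(K(u) = \frac{1}{\sqrt{2\pi}} e^{-u^2/2}\):

```python
import math

def parzen_kde(x, data, h):
    """Kernel density estimate at x with a Gaussian kernel of bandwidth h."""
    norm = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)

# Points spread evenly over [0, 1]: in the interior, the estimate is close
# to the true density 1 when h is chosen reasonably.
data = [i / 99 for i in range(100)]
print(parzen_kde(0.5, data, h=0.1))  # close to 1
```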

Showcase: From the Euclidean distance over standardized variables to the Mahalanobis distance
https://www.milania.de/showcase/From_the_Euclidean_distance_over_standardized_variables_to_the_Mahalanobis_distance
Sat, 21 Oct 2017 00:00:00 +0200, Jan Sellner

Measuring distance is an important task for many applications like preprocessing, clustering or classification of data. In general, the distance between two points can be calculated as
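The summary cuts off before the formula; presumably this is the Euclidean distance \(d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_i (x_i - y_i)^2}\), which the post then generalizes to the Mahalanobis distance \(d_M(\mathbf{x}, \boldsymbol{\mu}) = \sqrt{(\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})}\). A small 2-D sketch (my own illustration):

```python
import math

def mahalanobis_2d(x, mu, cov):
    """Mahalanobis distance sqrt((x-mu)^T cov^-1 (x-mu)) for 2-D points."""
    (a, b), (c, d) = cov
    det = a * d - b * c  # invert the 2x2 covariance matrix
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (x[0] - mu[0], x[1] - mu[1])
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.sqrt(q)

# With the identity covariance it reduces to the Euclidean distance ...
print(mahalanobis_2d((3, 4), (0, 0), ((1, 0), (0, 1))))  # 5.0
# ... while a larger variance along a dimension shrinks distances along it.
print(mahalanobis_2d((2, 0), (0, 0), ((4, 0), (0, 1))))  # 1.0
```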

Showcase: Subadditivity as a condition for a natural monopoly
https://www.milania.de/showcase/Subadditivit%C3%A4t_als_Bedingung_f%C3%BCr_ein_nat%C3%BCrliches_Monopol
Sat, 14 Oct 2017 00:00:00 +0200, Jan Sellner

There are several ways to define whether a natural monopoly exists. One of them uses subadditivity. Here, one assumes that the cost structure, and hence the cost function for producing the good, is known; at least this is required if one also wants to compute with the definition. Basically, the goal is to produce a certain quantity with the help of the cost functions. The question is then whether it is cheaper for a single firm or for several firms to produce the desired quantity.
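Formally, a cost function \(C\) is subadditive at quantity \(q\) if \(C(q) \le C(q_1) + C(q_2)\) for every split \(q_1 + q_2 = q\). A small numerical sketch (my own illustration, with made-up cost functions):

```python
def is_subadditive_at(cost, q, steps=100):
    """Check C(q) <= C(q1) + C(q - q1) over a grid of splits of q."""
    return all(cost(q) <= cost(q1) + cost(q - q1)
               for q1 in (q * i / steps for i in range(1, steps)))

# High fixed costs: one firm serving the whole market is cheaper
# -> natural monopoly at this quantity.
fixed_cost = lambda q: 100 + 2 * q
print(is_subadditive_at(fixed_cost, 50))   # True

# Strongly rising marginal costs: splitting production is cheaper.
convex_cost = lambda q: q ** 2
print(is_subadditive_at(convex_cost, 50))  # False
```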

Blog: Buffer vs. image performance for applying filters to an image pyramid in OpenCL
https://www.milania.de/blog/Buffer_vs._image_performance_for_applying_filters_to_an_image_pyramid_in_OpenCL
Wed, 30 Aug 2017 00:00:00 +0200, Jan Sellner

If you have worked with images in OpenCL before, you may have wondered how to store them. Basically, we have the choice between buffers and image objects. The latter seem to be designed for the task, but what are the differences, and how do the two perform on a given workload? For a recent project, I needed to decide which storage type to use, and I want to share the insights here. This is a follow-up to my previous post, in which I already evaluated filter operations in OpenCL.

Blog: Performance evaluation of image convolution with gradient filters in OpenCL
https://www.milania.de/blog/Performance_evaluation_of_image_convolution_with_gradient_filters_in_OpenCL
Thu, 17 Aug 2017 00:00:00 +0200, Jan Sellner

Filter operations are very common in computer vision applications and are often the first operations applied to an image; blur, sharpen and gradient filters are common examples. Mathematically, the underlying operation is called convolution and is already covered in a separate article. The good thing is that filter operations are very well suited for parallel computing and hence can be executed very fast on the GPU. I recently worked on a GPU project where the filter operations made up the dominant part of the complete application and hence got first priority in the optimization phase. In this article, I want to share some of the results.
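As a reminder of the underlying operation (a plain sketch of my own, unrelated to the OpenCL implementation benchmarked in the post): a 2-D convolution flips the kernel and slides it over the image, computing a weighted sum at each position. With a horizontal gradient filter such as the Sobel kernel, a linear ramp image yields a constant response:

```python
def convolve2d(img, kernel):
    """Naive 2-D convolution ('valid' region only): correlate with the flipped kernel."""
    kf = [row[::-1] for row in kernel[::-1]]  # convolution flips the kernel
    kh, kw = len(kf), len(kf[0])
    h, w = len(img), len(img[0])
    return [[sum(kf[i][j] * img[y + i][x + j]
                 for i in range(kh) for j in range(kw))
             for x in range(w - kw + 1)]
            for y in range(h - kh + 1)]

sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    # horizontal gradient filter
ramp = [[x for x in range(5)] for _ in range(5)]  # intensity rises left to right
print(convolve2d(ramp, sobel_x))  # constant response: every entry is -8
```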

Blog: Using the Surface pen as presenter
https://www.milania.de/blog/Using_the_Surface_pen_as_presenter
Fri, 11 Aug 2017 00:00:00 +0200, Jan Sellner

If you have a Surface and already use it for presentations, you may have wondered whether it is possible to use the Surface pen to control the slides. The idea is to use the top button of the pen, which is connected to the device via Bluetooth: pressing the button once proceeds one step forward in the presentation, and pressing it twice goes one step back. I tried exactly this, and here I want to show the solution which worked for me.

Blog: Introduction to the Hessian feature detector for finding blobs in an image
https://www.milania.de/blog/Introduction_to_the_Hessian_feature_detector_for_finding_blobs_in_an_image
Sun, 06 Aug 2017 00:00:00 +0200, Jan Sellner

In many computer vision applications, it is important to extract special features from an image which are distinctive, unambiguous to locate, and occur in different images showing the same scene. A complete subbranch of the computer vision field is dedicated to this task: feature matching. Usually, this process consists of three steps: detection, description and matching. I want to briefly summarize these steps, but you can easily find further information on the web or in related books^{1}.
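The detector of the title rates each pixel by the determinant of the Hessian (the matrix of second derivatives); blob centres give a strong positive response. A minimal sketch of my own using finite differences on a synthetic Gaussian blob (not code from the article):

```python
import math

def hessian_det(img, y, x):
    """Determinant of the Hessian at an interior pixel, via central differences."""
    ixx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    iyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    ixy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4
    return ixx * iyy - ixy * ixy

# Synthetic 17x17 image: a Gaussian blob centred at (8, 8)
img = [[math.exp(-((x - 8) ** 2 + (y - 8) ** 2) / 8) for x in range(17)]
       for y in range(17)]
print(hessian_det(img, 8, 8) > hessian_det(img, 2, 2))  # True: strongest at the blob centre
```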