LING KAN

Software Engineer

App Developer

Web Developer

User Experience

Email: ling.kan@outlook.com

About

Experienced technical specialist with a demonstrated history of working in the consumer electronics industry. Skilled in Human-Computer Interaction (HCI), innovation development, web technologies, and application design and development. Strong information technology professional with a Bachelor's degree in Computer Science from the University of Lincoln.

Skills

Operating Systems: Windows, Unix, Linux, Android, Mac OS

Languages: HTML, CSS, PHP, Java, JavaScript, C++, C#, C, Visual Basic, Prolog, IO, Lisp, Python

Databases: SQL, NoSQL, MySQL, MS Access, Amazon Web Services

Software: MS Office, Adobe Photoshop, Microsoft Visual Studio, MATLAB, Android Studio

Recent Work

Wordless Picture Book Application

The core aim of the project is to create an application containing wordless books that help enhance children's English comprehension skills. Wordless books can improve a child's creativity by prompting them to invent new stories and encounter new objects and images; these elements help build the child's awareness of their environment and of reality. Consequently, creating an interactive tablet application that assists with the development of a child's creativity and language skills is crucial, as it encourages parent-child engagement. This is beneficial because it can be incorporated into the curriculum as a learning tool, both in the classroom and at home.

The study details the design and development of a wordless book application for young children using an iterative, user-centred design process. The application was tested weekly for a month with children between the ages of 3 and 5. Results were generated by having parents, guardians and teachers complete an 'Oxford Primary English Assessment' checklist to see whether the child's language skills had improved.

Website: https://lingkan.co.uk/book1/
Github: https://github.com/lingkn/Wordless-Picture-Book

Grade Received: 1st
Programming Languages used: HTML, CSS, Java

Cross-Platform Application using Phonegap

About The Module

The phrase "write-once, run everywhere" is often used to evangelise the efficiencies of a single, sharable software codebase that runs unmodified on multiple platforms – this is increasingly important in the mobile device space. Platform-dependent, or 'native', mobile development is time-consuming and expensive: multiple mobile operating systems and ecosystems are targeted for maximum market outreach, each requiring per-platform expert knowledge of specific tools, runtime environments and programming languages. In contrast, this module provides students with knowledge of an alternative, and increasingly important, 'platform-agnostic' approach to mobile development. This approach embraces cross-platform methods by developing applications with a single code base that run efficiently across distinct mobile platforms, with maximum code reuse and interoperability. Students will investigate platform-dependent constraints by critiquing the emergent space of cross-platform tools and frameworks that aim to maximise code sharing between mobile platforms, whilst retaining common like-for-like sensor features such as geolocation, camera, storage and push notifications, without compromising performance or overall user experience. Contemporary cross-platform tools will be adopted throughout the module for the creation of applications that bridge multiple mobile platforms.

Syllabus Outline

i) Theory
• Design guidelines for mobile cross-platform development
• Hi-fidelity prototyping and user experience
• Critiquing the role of mobile-first design
• Evaluation of mobile interactions

ii) Development
• Design and development of cross-platform mobile applications
• Web and Hybrid application scenarios
• Connectivity and Web Services

Learning Outcomes 

LO1 Critically assess the implications and constraints of native mobile development in comparison to platform agnostic approaches
LO2 Design, prototype, and evaluate mobile applications using hi-fidelity approaches, based on well-developed user scenarios 
LO3 Develop cross-platform mobile applications utilizing industry standard tools and technologies

Assessment

Brief

For this assessment, you will need to design and develop a mobile app and report on the iterative process of designing and developing it. Your app must revolve around a “mobile moment”. For something to qualify as a mobile moment, it should not be possible / desirable to do the same activity with a laptop / desktop computer. Appropriate mobile moments could be (but are not limited to): commuting, waking up, being in a new city, hiking, sitting in a lecture, … This requirement has two main components: design and development.
Design
The app should showcase a thorough understanding of mobile design considerations (e.g. user interaction, user experience) and follow an iterative development approach. You must demonstrate a clear path from requirements analysis (with the use of personas and user stories) going through to the five planes of user experience, and resulting in an iterative prototyping process. The design process should be deliberate and reflective with each step following logically from the previous.
Development 
The app needs to be developed with cross-platform tools presented in the lectures/workshops. It is important that the app goes beyond simplistic HTML pages but instead makes use of more advanced features that are a meaningful response to the mobile moment. Advanced features could include nontrivial inclusion of 3rd party APIs or appropriate use of phone hardware (such as sensors).

Submission 

Functionality:
- Getting the user's location using cordova-plugin-geolocation
- Getting the user's address using the Geocoding API (in the demo an error message is shown because the quota was maxed out; see the sketch below)
- Creating a travel note from the user's location, date & time, a note and a photo, which is then saved into an SQL database
- Error handling in the database using Bootstrap
- Searching for a YouTube video based on the user's location using the YouTube API
- Finding nearby tourist attractions using the Places API
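
For illustration, the reverse-geocoding call behind the second bullet looks like the sketch below. The app itself makes the equivalent request from Cordova JavaScript, so this Python version using the requests library is purely illustrative (the API key is a placeholder, and OVER_QUERY_LIMIT is the status behind the demo's quota error):

import requests

def reverse_geocode(lat, lng, api_key):
    resp = requests.get(
        'https://maps.googleapis.com/maps/api/geocode/json',
        params={'latlng': '{},{}'.format(lat, lng), 'key': api_key},
    ).json()
    if resp['status'] == 'OVER_QUERY_LIMIT':   # the error shown in the demo
        return 'Quota exceeded - try again later'
    if resp['status'] == 'OK':
        return resp['results'][0]['formatted_address']
    return 'No address found ({})'.format(resp['status'])

print(reverse_geocode(53.23, -0.54, 'YOUR_API_KEY'))  # roughly Lincoln, UK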

Github: https://github.com/lingkn/Cross-Platform-Development

Grade Received: 1st
Programming Languages used: HTML5, CSS, Java

Parallel Computing - OpenCL

About The Module

Parallel Computing is an important, modern paradigm in Computer Science and a promising direction for keeping up with the expected exponential growth in the discipline. Executing multiple processes at the same time can tremendously increase computational throughput, not only benefiting scientific computations but also enabling exciting new applications such as real-time animated 3D graphics, video processing and physics simulation. The relevance of parallel computing is especially prominent due to the availability of modern, affordable computer hardware utilising multiple cores and/or large numbers of massively parallel units.

The module will cover the fundamentals of parallel and distributed computing, with a focus on communication and coordination among processes, performance and scalability. Different parallel architectures will be considered and compared, including multi-core processors, Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Optimisation issues related to the development of parallel algorithms will also be discussed. A special focus will be devoted to fundamental parallel algorithms including reduction, scan and sort. The content will be illustrated by many practical examples from computer graphics, computer vision and physics simulation. Practical tasks will involve programming Graphics Processing Units (GPUs) using the CUDA parallel programming platform.

Learning Outcomes 

LO1 demonstrate practical skills in applying parallel algorithms for solving computational problems
LO2 critique the theoretical knowledge underpinning parallel computation
LO3 analyse parallel architectures as a means to provide solutions to complex computational problems

Assessment 

Brief

Your task is to develop a simple statistical tool for analysing historical weather records from Lincolnshire. The provided data files include records of air temperature collected over a period of more than 80 years from five weather stations in Lincolnshire: Barkston Heath, Scampton, Waddington, Cranwell and Coningsby. Your tool should be able to load the provided dataset and perform statistical summaries of temperature including the min, max and average values, and standard deviation. The summaries should be performed on the entire dataset regardless of acquisition time and location. For additional credit, you can also consider the median statistic and its 1st and 3rd quartiles (i.e. the 25th and 75th percentiles), which will require the development of a suitable sorting algorithm.

Due to the large amount of data (i.e. 1.8 million records), all statistical calculations shall be performed on parallel hardware and implemented by a parallel software component written in OpenCL. Your tool should also report memory transfer, kernel execution and total program execution times for performance assessment. Further credit will be given for additional optimisation strategies which target the parallel performance of the tool; in such a case, your program should run and display execution times for different variants of your algorithm. Your basic implementation can assume temperature values expressed as integers, skipping everything after the decimal point. For additional credit, you should also consider the exact temperature values and their corresponding statistics.

You can base your code on the material provided during workshop sessions, but you are not allowed to use any existing parallel libraries (e.g. Boost.Compute). To help with code development, a shorter dataset is also provided which is 100 times smaller. The original file is called "weather_lincolnshire.txt" and the short dataset is "weather_lincolnshire_short.txt". More details about the file format are included in the "readme.txt" file. The data files are provided on Blackboard together with this description document in a file "temp_lincolnshire_datasets.zip". The output results and performance measures should be reported in a console window in a clear and readable format. All reading and displaying operations should be provided by the host code.
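
The submitted tool implements this in C host code with OpenCL kernels (see the repository below). As a hedged illustration of the core technique only, here is a minimal PyOpenCL sketch of a local-memory work-group reduction, shown for the minimum (max and sum work the same way), with kernel timing taken from profiling events. The temperature column index and the padding scheme are assumptions, not the documented file format (see readme.txt):

import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void reduce_min(__global const int* data, __global int* partial,
                         __local int* scratch) {
    int lid = get_local_id(0);
    scratch[lid] = data[get_global_id(0)];
    barrier(CLK_LOCAL_MEM_FENCE);
    for (int stride = get_local_size(0) / 2; stride > 0; stride /= 2) {
        if (lid < stride && scratch[lid + stride] < scratch[lid])
            scratch[lid] = scratch[lid + stride];
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (lid == 0) partial[get_group_id(0)] = scratch[0];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)
program = cl.Program(ctx, KERNEL).build()

# assumed column index; truncating to int32 matches the basic "skip the
# decimal part" requirement
temps = np.loadtxt('weather_lincolnshire_short.txt', usecols=(5,)).astype(np.int32)
local = 256
pad = (-len(temps)) % local                  # pad to a whole number of work-groups
temps = np.concatenate([temps, np.full(pad, np.iinfo(np.int32).max, np.int32)])

mf = cl.mem_flags
d_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=temps)
partial = np.empty(len(temps) // local, dtype=np.int32)
d_out = cl.Buffer(ctx, mf.WRITE_ONLY, partial.nbytes)

evt = program.reduce_min(queue, (len(temps),), (local,), d_in, d_out,
                         cl.LocalMemory(4 * local))
cl.enqueue_copy(queue, partial, d_out).wait()
evt.wait()
# each work-group leaves one partial minimum; the host folds the short remainder
print('min =', partial.min(), 'kernel ns =', evt.profile.end - evt.profile.start)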

Submission 

Weather Project


Github: https://github.com/lingkn/Parallel-Computing

Grade Received: 1st
Programming Languages: OpenCL, C



Travel Diary Prototype

Travel Diary records the location of every photo a user takes. This is useful for bloggers and travel enthusiasts, who can later look up where each photo was taken.

Users are then able to share their travel stories with friends and family, and find inspiration in the places other people have visited.

The following prototype was made in Balsamiq.

Autonomous Mobile Robotics

About The Module

The module introduces the main concepts of Autonomous Mobile Robotics, providing an understanding of the range of processing components required to build physically embodied robotic systems, from basic control architectures to spatial navigation in real-world environments. Students will be introduced to relevant theoretical concepts around robotic sensing and control in the lectures, together with a practical “hands on” approach to robot programming in the workshops.

Syllabus Outline

The module will introduce fundamental concepts in mobile robotics, with a particular focus on sensing and control for autonomous navigation in dynamic environments. An indicative list of topics is given as follows:

• Introduction to autonomous mobile robotics
• Robot programming
• Robot vision and sensing
• Navigation
• Control architectures
• Motion and control
• Robot behaviours
• Obstacle avoidance
• Robotic mapping and self-localisation
• Robotic systems


Learning Outcomes 

LO1 critically assess the theoretical capabilities of autonomous mobile robots 
LO2 understand and critically evaluate the range of possible applications for mobile robotic systems 
LO3 implement and empirically evaluate intelligent control strategies, by programming autonomous mobile robots to perform complex tasks in dynamic environments

Assessment 

Brief

Your first task (relating to Criterion 1 "Group Robot Tasks" in the CRG, 30% of the mark for assessment item one) consists of continuous engagement with a total of four workshop tasks, which you work on as a group of 3-4 students and demonstrate successfully on a real Turtlebot robot and in simulation.

Your second task (relating to Criteria 2 and 3 in the CRG, a total of 70% of the mark for assessment item one) is to develop an object search behaviour, programmed in Python using ROS, that enables a robot to search for coloured objects visible in the robot's camera. This assessment is done purely in simulation, not on the real robot. As part of this task, you must submit an implementation and a presentation.
Implementation
Your task is to implement a behaviour that enables the robot in simulation to find a total of 4 objects distributed in a simulated environment. You need to utilise the robot's sensory input and its actuators to guide the robot to each item. Success in locating an item is defined as: (a) being less than 1m from the item, and (b) an indication from the robot that it has found an object.

For the development and demonstration of your software component, you will be provided with a simulation environment (called "Gazebo"). The required software is installed on all machines in the Labs. The simulated environment includes four brightly coloured objects hidden at increasing levels of difficulty, and your robot starts from a predefined position. You will be provided with a "training arena" in simulation (a simulation of an indoor environment in which 4 objects will be "hidden"). This "training arena" will resemble the "test arena" in terms of structure and complexity (the same floor plan), but the positions of the objects will vary slightly to assess the generality of your approach.

You may choose any sensors available on the robot to drive your search behaviour. However, your system design should include the following elements (a minimal sketch follows the list):
1. Perception of the robot's environment using the Kinect sensor, either in RGB or Depth space, or using a combination of both RGB and Depth data, in order to find the object;
2. An implementation of an appropriate control law implementing a search behaviour on the robot. You may choose to realise this as a simple reactive behaviour or a more complex one, e.g. utilising a previously acquired map of the environment;
3. Motor control of the (simulated) Turtlebot robot using the implemented control law.
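
As a purely illustrative sketch of elements 1-3 (not the submitted solution): a simple reactive behaviour in Python/ROS that thresholds the RGB image in HSV space, steers towards the blob centroid, and rotates to search when nothing is visible. The topic names and colour band are assumptions that vary between Turtlebot setups.

#!/usr/bin/env python
import rospy
import cv2
import numpy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist

class ColourSearcher:
    def __init__(self):
        self.bridge = CvBridge()
        # assumed topics; some Turtlebot stacks use /mobile_base/commands/velocity
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/camera/rgb/image_raw', Image, self.image_cb)

    def image_cb(self, msg):
        bgr = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # example band for a green object; each target colour needs its own band
        mask = cv2.inRange(hsv, numpy.array([50, 100, 50], numpy.uint8),
                           numpy.array([70, 255, 255], numpy.uint8))
        m = cv2.moments(mask)
        twist = Twist()
        if m['m00'] > 10000:   # enough coloured pixels: home in on the blob
            err = m['m10'] / m['m00'] - bgr.shape[1] / 2
            twist.linear.x = 0.2
            twist.angular.z = -float(err) / 500
        else:                  # nothing seen: rotate on the spot to keep searching
            twist.angular.z = 0.4
        self.cmd_pub.publish(twist)

if __name__ == '__main__':
    rospy.init_node('colour_searcher')
    ColourSearcher()
    rospy.spin()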

The minimum required functionality consists of a simple reactive behaviour which, in principle, allows objects to be found. For an average mark, the behaviour should be able to successfully find some objects at unknown locations. Further extensions are possible to improve your mark in this assessment and to enable you to find all objects. Possible extensions to the system may include (but are not limited to):
● An enhanced perception system – in-built colour appearance learning, use of additional visual cues (e.g. edges), combination of RGB and Depth features, etc.;
● Exploiting maps and other structural features in the environment, or cleverer search strategies;
● Utilising other existing ROS components that are available (such as localisation, mapping, etc.).
The software component must be implemented in Python and be supported by use of ROS to communicate with the robot. The code should be well commented and clearly structured into functional blocks. The program must run on computers in Labs B and C. To obtain credit for this assignment you will need to demonstrate the various components of your software to the module instructors and be ready to answer questions related to the development of the solution – please follow carefully the instructions given in the lectures on the requirements for the demonstration, and see below for the presentation requirements.

Submission 

For this module, I had to develop an object search behaviour, programmed in Python using ROS, that enables a robot to search for coloured objects visible in the robot's camera.

The task was to implement a behaviour that enables the robot in simulation to find a total of 4 objects distributed in a simulated environment, utilising the robot's sensory input and actuators to guide it to each item.

The simulated environment includes four brightly coloured objects hidden at increasing levels of difficulty, and the robot starts from a predefined position.

The following presentation images show the concept of what I implemented and what I could improve on in future.

Github: https://github.com/lingkn/Autonomous-Mobile-Robotics

Grade Received: 2:1
Programming Languages Used: Python (using ROS)
System Used: Linux

Image Processing MATLAB

About The Module

Digital image processing techniques are used in a wide variety of application areas such as computer vision, robotics, remote sensing, industrial inspection and medical imaging. It is the study of algorithms that take an image as input and return useful information as output. This module aims to provide a broad introduction to the field of image processing, culminating in a practical understanding of how to apply and combine techniques in various image-related applications. Students will be able to extract useful data from raw images and interpret the image data. The techniques will be implemented using the mathematical programming language MATLAB or the OpenCV library.

Syllabus Outline

The content of the module covers the fundamentals of digital image processing: spatial processing and filtering, colour image processing, morphological image processing, image segmentation, image representation and description, and an introduction to pattern classification, together with practical programming in MATLAB. The module develops the following mathematical concepts and techniques: set theory, probability theory, gradients and derivatives, vectors and matrices, linear algebra, applied Bayesian estimation and non-linear filtering.

Learning Outcomes 

LO1 critique the theoretical knowledge of image processing, including how to process and extract quantifiable information from images
LO2 apply a range of imaging techniques to solve practical problems

Assessment 

Task 1 – Interpolation 

Complete the MATLAB script to load the image 'Zebra.jpg' and convert it to grey-scale. Then resize the image from its original size of 556 × 612 to an enlarged size of 1668 × 1836 by interpolation. Implement both nearest-neighbour and bilinear interpolation. Display both resized images in your report. Also add at least one close-up (zoomed-in section) to your report where the difference between the two interpolation techniques is clear.
For this task, you CANNOT use the MATLAB built-in functions 'imresize' and 'interp2'. However, you CAN use any other built-in function, if necessary.
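
For illustration only, a minimal NumPy sketch of the two schemes (the coursework itself was completed in MATLAB, where 'imresize' and 'interp2' were off limits; the function names here are hypothetical):

import numpy as np

def nearest_neighbour(img, new_h, new_w):
    h, w = img.shape
    # index of the nearest source pixel for every output row/column
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]

def bilinear(img, new_h, new_w):
    h, w = img.shape
    out = np.empty((new_h, new_w))
    for i in range(new_h):
        for j in range(new_w):
            y = i * (h - 1) / (new_h - 1)   # fractional source coordinates
            x = j * (w - 1) / (new_w - 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            # weighted blend of the four surrounding pixels
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx) +
                         img[y0, x1] * (1 - dy) * dx +
                         img[y1, x0] * dy * (1 - dx) +
                         img[y1, x1] * dy * dx)
    return out

Calling nearest_neighbour(grey, 1668, 1836) reproduces the blocky enlargement; bilinear(grey, 1668, 1836) smooths the close-ups, which is exactly the difference the report had to show.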

Task 2 – Point Processing 

Complete the MATLAB script to load the image 'SC.png' and apply the following piecewise-linear transformation function to the image. Assume the diagram is drawn to scale. This transformation highlights the range [A, B], but preserves all other grey levels (identity). You can use the following values: A = 80, B = 100 and C = 220. For this task, you CAN use any MATLAB built-in function. Add figures of the original and transformed images to your report.
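
The exact transfer function depends on the diagram in the brief (not reproduced here); one common reading stretches [A, B] linearly up towards C while leaving all other grey levels unchanged. A hypothetical NumPy sketch under that assumption:

import numpy as np

A, B, C = 80, 100, 220

def piecewise_linear(img):
    out = img.astype(float)
    band = (img >= A) & (img <= B)
    # map [A, B] linearly onto [A, C]; outside the band the identity applies
    out[band] = A + (out[band] - A) * (C - A) / (B - A)
    return out.clip(0, 255).astype(np.uint8)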


Task 3 – Neighbourhood Processing

Complete the MATLAB script to load the image 'Noisy.png' and convert it to grey-scale. Then implement smoothing filters using averaging and median filters with a kernel (mask) size of 5 (a neighbourhood of 5 × 5). Use zero-padding to deal with pixels on the edges of the image. For this task, you CANNOT use the MATLAB built-in functions 'fspecial', 'imfilter', 'conv2', 'medfilt2' and 'filter2'. However, you CAN use any other built-in function, if necessary.
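
A minimal NumPy sketch of the same idea (the coursework was done in MATLAB without the banned built-ins; this spells out the 5 × 5 window and the explicit zero-padding):

import numpy as np

def smooth(img, k=5, method='median'):
    pad = k // 2
    # zero-padding so edge pixels still see a full k x k neighbourhood
    padded = np.pad(img.astype(float), pad, mode='constant')
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + k, j:j + k]
            out[i, j] = np.median(window) if method == 'median' else window.mean()
    return out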


Task 4 – Object Recognition

Complete the MATLAB script to load the image 'Starfish.jpg' and, through a series of image processing techniques of your choosing, generate a binary image where zero means no starfish detected and a non-zero value means that the pixel belongs to a starfish, as shown in the figure below. For this task, you CAN use any built-in function.


To reduce the complexity of the image, I converted it to greyscale in preparation for boundary tracing later on.

Before doing this, I had to consider which spatial filtering method to use to remove the noise from the photo. The first option is the averaging filter, which is simple to implement and mainly used to remove Gaussian noise; it reduces the intensity variation between neighbouring pixels, removing noise but also blurring the image. The other is the median filter, which considers each pixel alongside its neighbours and is better at preserving detail while removing salt-and-pepper noise.

I determined that the specific noise within the image is salt-and-pepper noise, as it contains small black and white pixels when converted to greyscale. It was also important to keep small details within the image, as the starfish have a very specific and well-defined shape. Therefore, the median filter was best suited to this image.
Once the noise had been removed from the image, it had to be enhanced and sharpened to provide a clearer view and outline of each object and detail within the image, so that the stars would be more defined, allowing for segmentation.

Thereafter, I used edge detection to find the object boundaries within the image. Among the methods I considered were the Sobel and Canny methods. However, "the image resulting from edge detection cannot be used as a segmentation result" (Morav, 2009).

An example of the Sobel and Canny edge-detection methods applied to the image.

Therefore, image segmentation methods such as colour-based segmentation, thresholding and transform methods are preferable, as they allow me to identify objects and implement the segmentation. I first considered 'activecontour', which segments the image 'into foreground and background' (Uk.mathworks.com, 2017) and is mainly used for image segmentation and boundary tracking; it binarises the image by converting each pixel to binary. However, this process takes an iterative approach and consequently takes a long time to compute.
Another option is 'im2bw', which converts the greyscale image to a binary image using a fixed threshold value of 0.5, meaning 'graythresh' would also be needed to determine the threshold value.
Consequently, I decided to use 'imbinarize', as it computes the threshold automatically using Otsu's method. This "is used to perform histogram shape-based image thresholding automatically, or, the reduction of a gray level image to a binary image. The image to be threshold is considered as image containing two classes of pixels (e.g. foreground and background), then the optimum threshold separating those two classes, which lies in the range [0, 1]" (Ramanathan et al., 2009).


Once this was complete, I inverted the image so that I could use the morphology functions. The morphology functions remove objects and pixels that are not needed: using the 'strel' and 'imclose' functions creates a disk-shaped structuring element that fills the gaps. In addition, I applied the 'bwareaopen' function to remove any objects smaller than 50 pixels, clearing away small unneeded objects.


Once all the smaller objects have been removed, the starfish have to be located within the image using an estimate of each object's (star's) area and perimeter, based on a metric for the roundness of an object. Using 'regionprops' allows us to estimate the area of all the objects. Lastly, once the stars have been identified, the result is exported as a new image, leaving five star-shaped objects.
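
For illustration, the walkthrough above maps onto a short Python/scikit-image pipeline (the submitted work was in MATLAB; the scikit-image functions below merely mirror their MATLAB counterparts, and the filter sizes, threshold polarity and size cut-off are assumptions):

import math
from skimage import io, color, img_as_ubyte
from skimage.filters import median, threshold_otsu
from skimage.morphology import disk, closing, remove_small_objects
from skimage.measure import label, regionprops

grey = img_as_ubyte(color.rgb2gray(io.imread('Starfish.jpg')))  # reduce complexity
smooth = median(grey, disk(2))               # median filter: salt-and-pepper noise
# Otsu threshold; '<' inverts so the starfish end up as foreground (assumes they
# are darker than the background - flip to '>' otherwise)
binary = smooth < threshold_otsu(smooth)
binary = closing(binary, disk(4))            # strel('disk') + imclose: fill gaps
binary = remove_small_objects(binary, 50)    # bwareaopen: drop objects < 50 px

# roundness per region: 4*pi*area/perimeter^2 (exactly 1.0 for a circle)
for region in regionprops(label(binary)):
    roundness = 4 * math.pi * region.area / region.perimeter ** 2
    print(region.label, region.area, round(roundness, 2))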

Grade Received: 1st
Programming Languages used: C
Programs Used: MATLAB