
Indoor Positioning with Deep Learning: Identifying Buildings and Floors

(2016-11-19 21:01:35)
Tags: deep learning, indoor positioning, WiFi fingerprints
Category: Data Mining

Low-effort place recognition with WiFi fingerprints using deep learning

1. Background

This paper applies deep learning to indoor positioning with WiFi signals: given signal measurements, the model predicts the indoor location, here the building and floor. WiFi fingerprinting is also used for mobile robots, as WiFi signals are usually available indoors and can provide a rough initial position estimate or can be used together with other positioning systems.
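
To make the input concrete: a fingerprint is one received-signal-strength (RSSI) reading per known access point. Below is a toy sketch using the conventions of the UJIIndoorLoc dataset from Section 4 (520 access points, sentinel value 100 for "not detected"); the access-point indices and dBm values are made up.

import numpy as np

NOT_DETECTED = 100                            # UJIIndoorLoc sentinel for "AP not heard"
fingerprint = np.full(520, NOT_DETECTED)      # most access points are out of range
fingerprint[[12, 47, 305]] = [-48, -67, -83]  # made-up RSSI readings in dBm
print(fingerprint[10:15])                     # [100 100 -48 100 100]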

2. Method

An autoencoder first performs unsupervised feature learning; a supervised deep neural network then takes the learned features and classifies the building and floor.

WiFi information can be exploited to provide rough, global position estimates, without additional costs of exteroceptive sensors.
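
Before the full TensorFlow implementation in Section 4, here is a minimal NumPy sketch of the two-stage scheme on toy data. Everything in it (sizes, labels, learning rates) is made up for illustration, and unlike the real model below it keeps the encoder frozen in stage 2 instead of fine-tuning it.

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 8)                        # toy "fingerprints"
y = (X[:, 0] > 0).astype(int)                # toy two-class labels
W_enc = rng.randn(8, 3) * 0.1                # encoder: 8 -> 3
W_dec = rng.randn(3, 8) * 0.1                # decoder: 3 -> 8
W_clf = rng.randn(3, 2) * 0.1                # classifier head: 3 -> 2

# Stage 1 (unsupervised): train encoder + decoder to reconstruct the input
for _ in range(500):
    H = np.tanh(X.dot(W_enc))                # codes
    X_hat = H.dot(W_dec)                     # reconstruction
    d_out = 2.0 * (X_hat - X) / len(X)       # gradient of the mean squared error
    d_H = d_out.dot(W_dec.T) * (1 - H ** 2)  # backpropagate through tanh
    W_dec -= 0.1 * H.T.dot(d_out)
    W_enc -= 0.1 * X.T.dot(d_H)

# Stage 2 (supervised): keep the encoder, train a softmax classifier on the codes
for _ in range(500):
    H = np.tanh(X.dot(W_enc))
    logits = H.dot(W_clf)
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)        # softmax probabilities
    G = P.copy()
    G[np.arange(len(y)), y] -= 1             # cross-entropy gradient w.r.t. logits
    W_clf -= 0.5 * H.T.dot(G) / len(X)

pred = np.tanh(X.dot(W_enc)).dot(W_clf).argmax(axis=1)
print((pred == y).mean())                    # training accuracy of the toy model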

3. Model

Network architecture for place recognition with deep learning.

3.1 Feature representation learning

The encoder compresses the 520-dimensional WiFi fingerprint through fully connected layers of 256, 128, and 64 units (tanh activations); a mirror-image decoder reconstructs the input, and the reconstruction error drives the unsupervised pre-training (see the code in Section 4).

3.2 Supervised learning

After pre-training, the decoder is set aside and a classifier network is attached to the 64-dimensional code: two fully connected hidden layers of 128 units each, followed by a softmax output over the building/floor classes (see the code in Section 4).

4. Experiments

Dataset:
https://archive.ics.uci.edu/ml/datasets/UJIIndoorLoc
Code:
https://github.com/aqibsaeed/Place-Recognition-using-Autoencoders-and-NN/blob/master/Place recognition with WiFi fingerprints using AE and NN.ipynb
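
After downloading, a quick sanity check of the CSV layout (the shape and column names in the comments are what the standard trainingData.csv from the UCI page should contain):

import pandas as pd

df = pd.read_csv("trainingData.csv")
print(df.shape)                   # expected: (19937, 529)
print(list(df.columns[:2]))       # ['WAP001', 'WAP002']
print(list(df.columns[520:524]))  # ['LONGITUDE', 'LATITUDE', 'FLOOR', 'BUILDINGID']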


 
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import scale
 
# Load the training set
dataset = pd.read_csv("trainingData.csv", header=0)
# Standardize the 520 WAP signal-strength features
features = scale(np.asarray(dataset.iloc[:, 0:520]))
# Target: the concatenation of building ID and floor ID
labels = np.asarray(dataset["BUILDINGID"].map(str) + dataset["FLOOR"].map(str))
# One-hot encode the target
labels = np.asarray(pd.get_dummies(labels))
 
# Split into training and validation sets (roughly 70/30)
train_val_split = np.random.rand(len(features)) < 0.70
train_x = features[train_val_split]
train_y = labels[train_val_split]
val_x = features[~train_val_split]
val_y = labels[~train_val_split]
 
# Load the held-out test set
test_dataset = pd.read_csv("validationData.csv", header=0)
test_features = scale(np.asarray(test_dataset.iloc[:, 0:520]))
test_labels = np.asarray(test_dataset["BUILDINGID"].map(str) + test_dataset["FLOOR"].map(str))
test_labels = np.asarray(pd.get_dummies(test_labels))
 
 
# Initialize a weight matrix with small truncated-normal values
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)
 
 
# Initialize a bias vector with zeros
def bias_variable(shape):
    initial = tf.constant(0.0, shape=shape)
    return tf.Variable(initial)
 
# Sizes of the input layer and the hidden layers
n_input = 520
n_hidden_1 = 256
n_hidden_2 = 128
n_hidden_3 = 64
 
n_classes = labels.shape[1]
 
learning_rate = 0.01
training_epochs = 50
batch_size = 20
 
total_batches = dataset.shape[0] // batch_size
 
X = tf.placeholder(tf.float32, shape=[None, n_input])
Y = tf.placeholder(tf.float32, [None, n_classes])
 
# --------------------- Encoder Variables --------------- 
 
e_weights_h1 = weight_variable([n_input, n_hidden_1])
e_biases_h1 = bias_variable([n_hidden_1])
 
e_weights_h2 = weight_variable([n_hidden_1, n_hidden_2])
e_biases_h2 = bias_variable([n_hidden_2])
 
e_weights_h3 = weight_variable([n_hidden_2, n_hidden_3])
e_biases_h3 = bias_variable([n_hidden_3])
 
# --------------------- Decoder Variables --------------- 
 
d_weights_h1 = weight_variable([n_hidden_3, n_hidden_2])
d_biases_h1 = bias_variable([n_hidden_2])
 
d_weights_h2 = weight_variable([n_hidden_2, n_hidden_1])
d_biases_h2 = bias_variable([n_hidden_1])
 
d_weights_h3 = weight_variable([n_hidden_1, n_input])
d_biases_h3 = bias_variable([n_input])
 
# --------------------- DNN Variables ------------------ 
 
dnn_weights_h1 = weight_variable([n_hidden_3, n_hidden_2])
dnn_biases_h1 = bias_variable([n_hidden_2])
 
dnn_weights_h2 = weight_variable([n_hidden_2, n_hidden_2])
dnn_biases_h2 = bias_variable([n_hidden_2])
 
dnn_weights_out = weight_variable([n_hidden_2, n_classes])
dnn_biases_out = bias_variable([n_classes])
 
 
def encode(x):
    l1 = tf.nn.tanh(tf.add(tf.matmul(x, e_weights_h1), e_biases_h1))
    l2 = tf.nn.tanh(tf.add(tf.matmul(l1, e_weights_h2), e_biases_h2))
    l3 = tf.nn.tanh(tf.add(tf.matmul(l2, e_weights_h3), e_biases_h3))
    return l3
 
 
def decode(x):
    l1 = tf.nn.tanh(tf.add(tf.matmul(x, d_weights_h1), d_biases_h1))
    l2 = tf.nn.tanh(tf.add(tf.matmul(l1, d_weights_h2), d_biases_h2))
    l3 = tf.nn.tanh(tf.add(tf.matmul(l2, d_weights_h3), d_biases_h3))
    return l3
 
 
def dnn(x):
    l1 = tf.nn.tanh(tf.add(tf.matmul(x, dnn_weights_h1), dnn_biases_h1))
    l2 = tf.nn.tanh(tf.add(tf.matmul(l1, dnn_weights_h2), dnn_biases_h2))
    out = tf.nn.softmax(tf.add(tf.matmul(l2, dnn_weights_out), dnn_biases_out))
    return out
 
 
encoded = encode(X)
decoded = decode(encoded)
y_ = dnn(encoded)
 
# Reconstruction loss (unsupervised) and cross-entropy loss (supervised)
us_cost_function = tf.reduce_mean(tf.pow(X - decoded, 2))
s_cost_function = -tf.reduce_sum(Y * tf.log(y_))
us_optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(us_cost_function)
s_optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(s_cost_function)
 
correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
 
with tf.Session() as session:
    tf.initialize_all_variables().run()
 
    # ------------ 1. Training Autoencoders Unsupervised Learning ----------- 
    for epoch in range(training_epochs):
        epoch_costs = np.empty(0)
        for b in range(total_batches):
            offset = (b * batch_size) % (features.shape[0] - batch_size)
            batch_x = features[offset:(offset + batch_size), :]
            _, c = session.run([us_optimizer, us_cost_function], feed_dict={X: batch_x})
            epoch_costs = np.append(epoch_costs, c)
        print "Epoch: ", epoch, " Loss: ", np.mean(epoch_costs)
    print "Unsupervised pre-training finished..."
 
    # ---------------- 2. Training NN Supervised Learning ------------------ 
    for epoch in range(training_epochs):
        epoch_costs = np.empty(0)
        for b in range(total_batches):
            offset = (b * batch_size) % (train_x.shape[0] - batch_size)
            batch_x = train_x[offset:(offset + batch_size), :]
            batch_y = train_y[offset:(offset + batch_size), :]
            _, c = session.run([s_optimizer, s_cost_function], feed_dict={X: batch_x, Y: batch_y})
            epoch_costs = np.append(epoch_costs, c)
        print "Epoch: "epoch, " Loss: "np.mean(epoch_costs)" Training Accuracy: "\ 
            session.run(accuracy, feed_dict={X: train_x, Y: train_y})\ 
            "Validation Accuracy:"session.run(accuracy, feed_dict={X: val_x, Y: val_y})
 
    print "Supervised training finished..."
 
    print "\nTesting Accuracy:"session.run(accuracy, feed_dict={X: test_features, Y: test_labels})

Output:
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:924] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: Tesla K20m
major: 3 minor: 5 memoryClockRate (GHz) 0.7055
pciBusID 0000:02:00.0
Total memory: 4.69GiB
Free memory: 4.61GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x7f53dc8ce0f0
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:924] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 1 with properties:
name: Tesla K20m
major: 3 minor: 5 memoryClockRate (GHz) 0.7055
pciBusID 0000:03:00.0
Total memory: 4.69GiB
Free memory: 4.61GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 1: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K20m, pci bus id: 0000:02:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:1) -> (device: 1, name: Tesla K20m, pci bus id: 0000:03:00.0)
Epoch: 0 Loss: 0.972427429938
Epoch: 1 Loss: 0.918405156897
Epoch: 2 Loss: 0.885032916808
Epoch: 3 Loss: 0.860882415485
Epoch: 4 Loss: 0.842494339843
Epoch: 5 Loss: 0.82808154579
Epoch: 6 Loss: 0.816558667949
Epoch: 7 Loss: 0.807199725978
Epoch: 8 Loss: 0.799487044414
Epoch: 9 Loss: 0.793033128836
Epoch: 10 Loss: 0.787544836825
Epoch: 11 Loss: 0.78280417657
Epoch: 12 Loss: 0.778652542297
Epoch: 13 Loss: 0.774973362744
Epoch: 14 Loss: 0.771681327567
Epoch: 15 Loss: 0.768712920387
Epoch: 16 Loss: 0.766019414736
Epoch: 17 Loss: 0.763562623322
Epoch: 18 Loss: 0.761311914323
Epoch: 19 Loss: 0.759242096205
Epoch: 20 Loss: 0.757332025522
Epoch: 21 Loss: 0.755563649787
Epoch: 22 Loss: 0.753921394592
Epoch: 23 Loss: 0.752391643792
Epoch: 24 Loss: 0.750962468485
Epoch: 25 Loss: 0.749623297807
Epoch: 26 Loss: 0.748364870695
Epoch: 27 Loss: 0.7471789399
Epoch: 28 Loss: 0.746058224958
Epoch: 29 Loss: 0.744996293048
Epoch: 30 Loss: 0.743987435056
Epoch: 31 Loss: 0.743026601368
Epoch: 32 Loss: 0.742109364677
Epoch: 33 Loss: 0.741231803614
Epoch: 34 Loss: 0.740390511664
Epoch: 35 Loss: 0.739582459902
Epoch: 36 Loss: 0.738805004362
Epoch: 37 Loss: 0.738055822653
Epoch: 38 Loss: 0.737332862731
Epoch: 39 Loss: 0.736634357335
Epoch: 40 Loss: 0.735958700087
Epoch: 41 Loss: 0.735304501439
Epoch: 42 Loss: 0.734670523666
Epoch: 43 Loss: 0.734055654277
Epoch: 44 Loss: 0.73345891022
Epoch: 45 Loss: 0.732879392249
Epoch: 46 Loss: 0.732316288514
Epoch: 47 Loss: 0.731768885208
Epoch: 48 Loss: 0.731236489542
Epoch: 49 Loss: 0.730718483602
Unsupervised pre-training finished...
Epoch: 0 Loss: 85.3875749823 Training Accuracy: 0.165538 Validation Accuracy: 0.168866
Epoch: 1 Loss: 94.3885523046 Training Accuracy: 0.169744 Validation Accuracy: 0.16802
Epoch: 2 Loss: 81.3811396994 Training Accuracy: 0.194482 Validation Accuracy: 0.189679
Epoch: 3 Loss: 53.0147602904 Training Accuracy: 0.332217 Validation Accuracy: 0.332487
Epoch: 4 Loss: 42.4159937341 Training Accuracy: 0.406288 Validation Accuracy: 0.408291
Epoch: 5 Loss: 35.4242737145 Training Accuracy: 0.454695 Validation Accuracy: 0.455499
Epoch: 6 Loss: 26.7856155324 Training Accuracy: 0.52898 Validation Accuracy: 0.515229
Epoch: 7 Loss: 22.0922777938 Training Accuracy: 0.56655 Validation Accuracy: 0.567344
Epoch: 8 Loss: 17.6528822034 Training Accuracy: 0.617524 Validation Accuracy: 0.611506
Epoch: 9 Loss: 15.7423095441 Training Accuracy: 0.669851 Validation Accuracy: 0.666498
Epoch: 10 Loss: 15.4491019001 Training Accuracy: 0.656235 Validation Accuracy: 0.649408
Epoch: 11 Loss: 10.3274225694 Training Accuracy: 0.742711 Validation Accuracy: 0.731811
Epoch: 12 Loss: 8.3928582454 Training Accuracy: 0.735724 Validation Accuracy: 0.726904
Epoch: 13 Loss: 6.71765737772 Training Accuracy: 0.753262 Validation Accuracy: 0.741455
Epoch: 14 Loss: 8.12444285148 Training Accuracy: 0.745919 Validation Accuracy: 0.734856
Epoch: 15 Loss: 7.9868746118 Training Accuracy: 0.763456 Validation Accuracy: 0.7511
Epoch: 16 Loss: 6.60324532225 Training Accuracy: 0.812647 Validation Accuracy: 0.796954
Epoch: 17 Loss: 6.574044797 Training Accuracy: 0.787553 Validation Accuracy: 0.77665
Epoch: 18 Loss: 6.03176876292 Training Accuracy: 0.8073 Validation Accuracy: 0.791709
Epoch: 19 Loss: 5.97733659652 Training Accuracy: 0.789763 Validation Accuracy: 0.771405
Epoch: 20 Loss: 5.61177751975 Training Accuracy: 0.836102 Validation Accuracy: 0.811845
Epoch: 21 Loss: 4.42795247603 Training Accuracy: 0.856206 Validation Accuracy: 0.835872
Epoch: 22 Loss: 4.33232538241 Training Accuracy: 0.845513 Validation Accuracy: 0.820982
Epoch: 23 Loss: 4.69724553579 Training Accuracy: 0.866044 Validation Accuracy: 0.847885
Epoch: 24 Loss: 4.42837274317 Training Accuracy: 0.879376 Validation Accuracy: 0.862267
Epoch: 25 Loss: 3.71973289642 Training Accuracy: 0.88458 Validation Accuracy: 0.862098
Epoch: 26 Loss: 3.80662493487 Training Accuracy: 0.863621 Validation Accuracy: 0.843486
Epoch: 27 Loss: 3.23450719336 Training Accuracy: 0.875312 Validation Accuracy: 0.861083
Epoch: 28 Loss: 2.6861975121 Training Accuracy: 0.906253 Validation Accuracy: 0.883418
Epoch: 29 Loss: 2.61421825266 Training Accuracy: 0.837457 Validation Accuracy: 0.818613
Epoch: 30 Loss: 2.70730450875 Training Accuracy: 0.893634 Validation Accuracy: 0.871066
Epoch: 31 Loss: 3.40998988779 Training Accuracy: 0.917588 Validation Accuracy: 0.889171
Epoch: 32 Loss: 2.91868350113 Training Accuracy: 0.908391 Validation Accuracy: 0.879188
Epoch: 33 Loss: 3.58176934827 Training Accuracy: 0.921438 Validation Accuracy: 0.894416
Epoch: 34 Loss: 3.22659611839 Training Accuracy: 0.890355 Validation Accuracy: 0.869036
Epoch: 35 Loss: 2.49433000001 Training Accuracy: 0.907322 Validation Accuracy: 0.881049
Epoch: 36 Loss: 2.41701531158 Training Accuracy: 0.931846 Validation Accuracy: 0.906599
Epoch: 37 Loss: 2.18132389116 Training Accuracy: 0.919228 Validation Accuracy: 0.895939
Epoch: 38 Loss: 2.00751458924 Training Accuracy: 0.944393 Validation Accuracy: 0.915567
Epoch: 39 Loss: 1.88354813821 Training Accuracy: 0.871819 Validation Accuracy: 0.845178
Epoch: 40 Loss: 2.7492925317 Training Accuracy: 0.942611 Validation Accuracy: 0.91709
Epoch: 41 Loss: 2.45516851596 Training Accuracy: 0.931204 Validation Accuracy: 0.903384
Epoch: 42 Loss: 2.39508329148 Training Accuracy: 0.92878 Validation Accuracy: 0.901185
Epoch: 43 Loss: 3.5211457783 Training Accuracy: 0.933414 Validation Accuracy: 0.907614
Epoch: 44 Loss: 2.12327189477 Training Accuracy: 0.951665 Validation Accuracy: 0.923181
Epoch: 45 Loss: 1.60314020851 Training Accuracy: 0.956085 Validation Accuracy: 0.926396
Epoch: 46 Loss: 1.32214506475 Training Accuracy: 0.957653 Validation Accuracy: 0.928257
Epoch: 47 Loss: 1.42987962397 Training Accuracy: 0.953519 Validation Accuracy: 0.924027
Epoch: 48 Loss: 1.66278500997 Training Accuracy: 0.958153 Validation Accuracy: 0.931811
Epoch: 49 Loss: 1.9498345839 Training Accuracy: 0.947388 Validation Accuracy: 0.916921
Supervised training finished...

Testing Accuracy: 0.730873


Source:
https://github.com/aqibsaeed/Place-Recognition-using-Autoencoders-and-NN/blob/master/Place recognition with WiFi fingerprints using AE and NN.ipynb
