InMemoryDataset

class paddle.fluid.dataset.InMemoryDataset[source]

InMemoryDataset loads data into memory and shuffles it before training. Instances of this class should be created by DatasetFactory.

Example

dataset = paddle.fluid.DatasetFactory().create_dataset("InMemoryDataset")

set_queue_num(queue_num)

Set the number of output queues of the Dataset; training threads fetch data from these queues.

Parameters

queue_num (int) – number of dataset output queues

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
dataset.set_queue_num(12)
set_parse_ins_id(parse_ins_id)

Set whether the Dataset needs to parse ins_id.

Parameters

parse_ins_id (bool) – whether to parse ins_id

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
dataset.set_parse_ins_id(True)
set_parse_content(parse_content)

Set whether the Dataset needs to parse content.

Parameters

parse_content (bool) – whether to parse content

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
dataset.set_parse_content(True)
set_fleet_send_batch_size(fleet_send_batch_size=1024)

Set the fleet send batch size. Default is 1024.

Parameters

fleet_send_batch_size (int) – fleet send batch size

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
dataset.set_fleet_send_batch_size(800)
set_fleet_send_sleep_seconds(fleet_send_sleep_seconds=0)

Set the fleet send sleep time in seconds. Default is 0.

Parameters

fleet_send_sleep_seconds (int) – fleet send sleep time in seconds

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
dataset.set_fleet_send_sleep_seconds(2)
set_merge_by_lineid(var_list, erase_duplicate_feas=True, min_merge_size=2, keep_unmerged_ins=True)

Set merge by line id: instances with the same line id will be merged after shuffle. You should parse the line id in your data generator.

Parameters
  • var_list (list) – slots that can be merged; each element in var_list is a Variable. Some slots, such as show and click, are usually not merged for the same line id, so the user should specify which slots can be merged.

  • erase_duplicate_feas (bool) – whether to erase duplicate feasigns when merging. Default is True.

  • min_merge_size (int) – minimum number of instances with the same line id to merge. Default is 2.

  • keep_unmerged_ins (bool) – whether to keep unmerged instances, such as instances with a unique id or groups with fewer than min_merge_size instances of the same id.

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
dataset.set_merge_by_lineid()
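The merging behavior can be illustrated with a plain-Python sketch (the function and data layout below are illustrative only, not part of the Paddle API): instances sharing a line id are grouped, their feasigns are concatenated (optionally deduplicated), and groups smaller than min_merge_size are kept or dropped according to keep_unmerged_ins.

```python
from collections import OrderedDict

def merge_by_lineid(instances, erase_duplicate_feas=True,
                    min_merge_size=2, keep_unmerged_ins=True):
    """Illustrative merge: each instance is a (line_id, [feasigns]) pair."""
    groups = OrderedDict()
    for line_id, feas in instances:
        groups.setdefault(line_id, []).append(feas)

    merged = []
    for line_id, group in groups.items():
        if len(group) < min_merge_size:
            if keep_unmerged_ins:
                merged.extend((line_id, feas) for feas in group)
            continue
        feas = [f for member in group for f in member]
        if erase_duplicate_feas:
            # Drop duplicate feasigns while preserving order.
            feas = list(OrderedDict.fromkeys(feas))
        merged.append((line_id, feas))
    return merged
```

For example, two instances with line id 1 collapse into one merged instance, while a lone instance with line id 2 survives only if keep_unmerged_ins is True.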
load_into_memory()

Load data into memory

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
preload_into_memory(thread_num=None)

Load data into memory in async mode

Parameters

thread_num (int) – preload thread num

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.preload_into_memory()
dataset.wait_preload_done()
wait_preload_done()

Wait until preload_into_memory is done.

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.preload_into_memory()
dataset.wait_preload_done()
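preload_into_memory and wait_preload_done follow a start-then-join pattern: loading runs in background threads while the caller continues, and the wait call blocks until every loader thread has finished. A minimal sketch of that pattern in plain Python (the class and its members are hypothetical, not Paddle internals):

```python
import threading

class PreloadingStore:
    """Toy model of the preload/wait pattern."""

    def __init__(self):
        self.records = []
        self._lock = threading.Lock()
        self._threads = []

    def preload(self, filelist):
        def load(path):
            # Stand-in for real file parsing work.
            with self._lock:
                self.records.append(path)
        self._threads = [threading.Thread(target=load, args=(p,))
                         for p in filelist]
        for t in self._threads:
            t.start()          # loading proceeds in the background

    def wait_preload_done(self):
        for t in self._threads:
            t.join()           # block until all loaders finish
        self._threads = []
```

Calling wait_preload_done before using the data is mandatory; otherwise training could start on a partially loaded store.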
local_shuffle()

Local shuffle

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
dataset.local_shuffle()
global_shuffle(fleet=None, thread_num=12)

Global shuffle. Global shuffle can be used only in distributed mode, i.e. multiple processes on a single machine or multiple machines training together. If you run in distributed mode, you should pass fleet instead of None.

Examples

import paddle.fluid as fluid
from paddle.fluid.incubate.fleet.parameter_server.pslib import fleet
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
dataset.global_shuffle(fleet)
Parameters
  • fleet (Fleet) – fleet singleton. Default None.

  • thread_num (int) – shuffle thread num. Default is 12.
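Conceptually, a global shuffle redistributes every instance to a pseudo-randomly chosen worker and then shuffles each worker's data locally, so the final order is independent of which worker originally read which file. A hedged plain-Python sketch of that idea (not Paddle's actual implementation):

```python
import random

def global_shuffle(worker_data, num_workers, seed=0):
    """Conceptual global shuffle: send each instance to a random
    worker bucket, then shuffle every bucket locally."""
    rng = random.Random(seed)
    buckets = [[] for _ in range(num_workers)]
    for data in worker_data:              # data held by each worker
        for ins in data:
            buckets[rng.randrange(num_workers)].append(ins)
    for bucket in buckets:
        rng.shuffle(bucket)               # local shuffle after redistribution
    return buckets
```

No instance is lost or duplicated; only its worker assignment and position change.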

release_memory()

Release InMemoryDataset memory data when the data will not be used again.

Examples

import paddle.fluid as fluid
from paddle.fluid.incubate.fleet.parameter_server.pslib import fleet
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
dataset.global_shuffle(fleet)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
exe.train_from_dataset(fluid.default_main_program(), dataset)
dataset.release_memory()
get_memory_data_size(fleet=None)

Get memory data size. Users can call this function to get the number of instances across all workers after loading into memory.

Note

This function may hurt performance because it contains a barrier.

Parameters

fleet (Fleet) – Fleet Object.

Returns

The size of memory data.

Examples

import paddle.fluid as fluid
from paddle.fluid.incubate.fleet.parameter_server.pslib import fleet
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
print(dataset.get_memory_data_size(fleet))
get_shuffle_data_size(fleet=None)

Get shuffle data size. Users can call this function to get the number of instances across all workers after local/global shuffle.

Note

This function may hurt the performance of local shuffle because it contains a barrier; it does not affect global shuffle.

Parameters

fleet (Fleet) – Fleet Object.

Returns

The size of shuffle data.

Examples

import paddle.fluid as fluid
from paddle.fluid.incubate.fleet.parameter_server.pslib import fleet
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
filelist = ["a.txt", "b.txt"]
dataset.set_filelist(filelist)
dataset.load_into_memory()
dataset.global_shuffle(fleet)
print(dataset.get_shuffle_data_size(fleet))
desc()

Returns a protobuf message for this DataFeedDesc

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset()
print(dataset.desc())
Returns

A string message

set_batch_size(batch_size)

Set the batch size. It takes effect during training.

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset()
dataset.set_batch_size(128)
Parameters

batch_size (int) – batch size

set_fea_eval(record_candidate_size, fea_eval=True)

Set fea eval mode for slots shuffle to debug the importance level of slots (features). fea_eval needs to be set to True to enable slots shuffle.

Parameters
  • record_candidate_size (int) – number of candidate instances used to shuffle one slot

  • fea_eval (bool) – whether to enable fea eval mode for slots shuffle. Default is True.

Examples


import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
dataset.set_fea_eval(1000000, True)

set_filelist(filelist)

Set file list in current worker.

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset()
dataset.set_filelist(['a.txt', 'b.txt'])
Parameters

filelist (list) – file list

set_hdfs_config(fs_name, fs_ugi)

Set HDFS config: fs name and ugi.

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset()
dataset.set_hdfs_config("my_fs_name", "my_fs_ugi")
Parameters
  • fs_name (str) – fs name

  • fs_ugi (str) – fs ugi

set_pipe_command(pipe_command)

Set the pipe command of the current dataset. A pipe command is a UNIX pipeline command used to preprocess each line of data.

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset()
dataset.set_pipe_command("python my_script.py")
Parameters

pipe_command (str) – pipe command

set_thread(thread_num)

Set the thread number, i.e. the number of readers.

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset()
dataset.set_thread(12)
Parameters

thread_num (int) – thread num

set_use_var(var_list)

Set Variables which you will use.

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset()
dataset.set_use_var([data, label])
Parameters

var_list (list) – variable list

slots_shuffle(slots)

Slots Shuffle is a shuffle method at the slot level, usually used with sparse features on a large number of instances. By comparing a metric, e.g. AUC, after shuffling one or several slots against a baseline, you can evaluate the importance level of those slots (features).

Parameters

slots (list[string]) – the set of slot names (strings) on which to do slots shuffle.

Examples

import paddle.fluid as fluid
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
dataset.set_merge_by_lineid()
# suppose there is a slot 0
dataset.slots_shuffle(['0'])
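The idea behind slots shuffle can be sketched in plain Python (this is a conceptual model, not Paddle's implementation): permute one slot's values across instances while every other slot stays in place, which destroys only that slot's correlation with the label.

```python
import random

def slots_shuffle(instances, slot, seed=0):
    """Conceptual slot shuffle: each instance is a dict mapping slot
    name to value; only the chosen slot's values are permuted."""
    rng = random.Random(seed)
    values = [ins[slot] for ins in instances]
    rng.shuffle(values)
    return [dict(ins, **{slot: v}) for ins, v in zip(instances, values)]
```

If the metric barely drops after shuffling a slot, that slot likely carries little predictive signal.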