Improve the initializer Interface for fc, sequence_conv and conv2d layers #5760
@@ -15,6 +15,37 @@ def unique_name(prefix):
     return "_".join([prefix, str(uid)])
 
 
+def convert_np_dtype_to_dtype_(np_dtype):
+    dtype = np.dtype(np_dtype)
+    if dtype == np.float32:
+        return core.DataType.FP32
+    elif dtype == np.float64:
+        return core.DataType.FP64
+    elif dtype == np.float16:
+        return core.DataType.FP16
+    elif dtype == np.int32:
+        return core.DataType.INT32
+    elif dtype == np.int16:
+        return core.DataType.INT16
+    elif dtype == np.int64:
+        return core.DataType.INT64
+    elif dtype == np.bool:
+        return core.DataType.BOOL
+    else:
+        raise ValueError("Not supported numpy dtype " + str(dtype))
+
+
+def dtype_is_floating(dtype):
+    if not isinstance(dtype, core.DataType):
+        dtype = convert_np_dtype_to_dtype_(dtype)
+
+    if (dtype == core.DataType.FP16 or dtype == core.DataType.FP32 or
+            dtype == core.DataType.FP64):
+        return True
+    else:
+        return False
+
+
 def _debug_string_(proto, throw_on_error=True):
     error_fields = list()
     if not proto.IsInitialized(error_fields) and throw_on_error:

Review comment on the floating-point check in dtype_is_floating: Same as above.
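For reference, here is a short usage sketch of the two new module-level helpers. The import paths are assumptions about where framework.py and the core binding module lived at the time of this PR, and may not match the exact build layout:

```python
import numpy as np

# Assumed import paths; the module layout at the time of this PR may differ.
import paddle.v2.fluid.core as core
from paddle.v2.fluid.framework import convert_np_dtype_to_dtype_, dtype_is_floating

# Numpy dtypes (or anything np.dtype() accepts) map onto core.DataType values.
assert convert_np_dtype_to_dtype_(np.float32) == core.DataType.FP32
assert convert_np_dtype_to_dtype_("int64") == core.DataType.INT64

# dtype_is_floating normalizes its argument first, so both forms work.
assert dtype_is_floating(np.float64)
assert not dtype_is_floating(core.DataType.INT32)
```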
@@ -66,7 +97,7 @@ def __init__(self,
                         "matched.".format(self.name, old_shape, shape))
         if dtype is not None:
             if not isinstance(dtype, core.DataType):
-                dtype = Variable._convert_np_dtype_to_dtype_(dtype)
+                dtype = convert_np_dtype_to_dtype_(dtype)
             if is_new_var:
                 self.desc.set_data_type(dtype)
             else:
@@ -148,26 +179,6 @@ def _unique_var_name_():
         uid = core.unique_integer(prefix)  # unique during whole process.
         return "_".join([prefix, str(uid)])
-
-    @staticmethod
-    def _convert_np_dtype_to_dtype_(np_dtype):
-        dtype = np.dtype(np_dtype)
-        if dtype == np.float32:
-            return core.DataType.FP32
-        elif dtype == np.float64:
-            return core.DataType.FP64
-        elif dtype == np.float16:
-            return core.DataType.FP16
-        elif dtype == np.int32:
-            return core.DataType.INT32
-        elif dtype == np.int16:
-            return core.DataType.INT16
-        elif dtype == np.int64:
-            return core.DataType.INT64
-        elif dtype == np.bool:
-            return core.DataType.BOOL
-        else:
-            raise ValueError("Not supported numpy dtype " + str(dtype))
 
 
 def get_all_op_protos():
     """
Why make the type conversion a global function? I think the staticmethod is more appropriate here, because it keeps the type conversion function from being called outside of Variable.
The reason this was made a global function is that it is needed outside of the Variable class. In layer_helper, we want to make sure that every parameter which has not been supplied an initializer gets a default initializer. That default depends on the dtype of the parameter: if the parameter has a floating-point type, then XavierInitializer is used; otherwise, for int and bool types, the parameter is initialized with zeros. We need this method outside of Variable because users can also pass numpy datatypes as dtypes. The default initializer has to be chosen in layer_helper, so we need to check there whether the supplied datatype (which could be a numpy dtype or a core.DataType) is a floating-point type. Do you have any suggestion on how to accomplish this without making the function global?
I found a solution that resolves this problem. convert_np_dtype_to_dtype_ goes the wrong way: this function just lets users configure a data type via a string such as float32 or float64. But we should only let users configure supported data types such as paddle.float32 and paddle.float64, and make the real type conversion (from/to numpy) happen in the feed/fetch implementation.
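As I read it, the proposal looks roughly like the sketch below: expose framework-level dtype aliases that users configure directly, and confine the numpy conversion to the feed/fetch boundary. The alias names and the feed helper here are hypothetical, not an existing Paddle API:

```python
import numpy as np

# Assumed import path for the C++ binding module.
import paddle.v2.fluid.core as core

# Hypothetical user-facing aliases: users would write paddle.float32
# instead of the string "float32" or a raw numpy dtype.
float32 = core.DataType.FP32
float64 = core.DataType.FP64
int64 = core.DataType.INT64

# numpy appears only at the feed/fetch boundary.
_NP_DTYPE_FOR = {
    core.DataType.FP32: np.float32,
    core.DataType.FP64: np.float64,
    core.DataType.INT64: np.int64,
}


def feed_ndarray(array, expected_dtype):
    # Hypothetical feed-side helper: convert user data to the tensor's dtype
    # right before it is copied into the runtime tensor.
    return np.asarray(array, dtype=_NP_DTYPE_FOR[expected_dtype])
```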