Devnews
Updates in the trunk since the last release (up to `git shortlog -ns rel-0.6rc2..`).
To see the merges: `git log -p rel-0.5... | grep Merge | less`
PRs merged since 0.6rc2, listed with: `git log -p rel-0.6rc2... | grep Merge | grep '#' | cut -f 8 -d ' ' | replace "#" "* https://github.com/Theano/Theano/pull/"`
#1209: Correctly support arrays with more than 2*10e32 elements in AdvancedSubtensor1 (Abalkin).
- https://github.com/Theano/Theano/pull/1198
GpuContiguous now checks the strides of the preallocated output before using it (Pascal L.)
Take the compiledir lock only for ops that generate c_code (Fred)
DebugMode.check_preallocated_output now also works on Theano function outputs (Pascal L.)
GpuContiguous.grad (Ian G.)
Fix path problem in some cases with theano-nose --batch (Abalkin)
Fix a compilation problem on the GPU on Windows (Fred)
Warnings about bugs fixed before 0.5 are now ignored by default (Fred)
tensor_var.{dot,std,argmin,argmax,argsort,clip,conj,repeat,round,trace,real,imag} now support the NumPy syntax (abalkin)
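A minimal sketch of a few of these NumPy-style methods on a tensor variable (the variable names and the exact keyword coverage are assumptions):

```python
import numpy
import theano
import theano.tensor as T

x = T.matrix('x')
# argmax/clip/round/std called as methods, NumPy-style
f = theano.function([x], [x.argmax(axis=1), x.clip(0, 1), x.round(), x.std()])
f(numpy.random.randn(3, 4).astype(theano.config.floatX))
```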
In the grad method, when asked to raise an error if there is no path between the variables, we did not always raise one; we returned the mathematically correct answer, 0, instead (Ian G.)
- https://github.com/Theano/Theano/pull/1091 Rami Al-Rfou', Vivek Kulkarni
Speed up sparse.AddSD; new op sparse.ConstructSparseFromList for advanced_subtensor1. (TODO: how to specify the sparse_grad?)
More detection of infinite loops in global optimizations (Pascal L.)
New features:
- tensor.tensordot now supports Rop/Lop (Jeremiah Lowin)
- This removes the TensorDot and TensorDotGrad classes; the Dot/Elemwise ops are used instead.
- tensor.dot supports n-dimensional inputs, as in NumPy (Jeremiah Lowin)
- Works on the GPU too.
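A minimal sketch of the NumPy-style n-dimensional dot (the shapes and names are illustrative):

```python
import numpy
import theano
import theano.tensor as T

a = T.tensor3('a')                   # e.g. shape (2, 3, 4)
b = T.matrix('b')                    # e.g. shape (4, 5)
c = T.dot(a, b)                      # same semantics as numpy.dot -> shape (2, 3, 5)
f = theano.function([a, b], c)

out = f(numpy.random.rand(2, 3, 4).astype(theano.config.floatX),
        numpy.random.rand(4, 5).astype(theano.config.floatX))
print(out.shape)                     # (2, 3, 5)
```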
Scan optimizations that were skipped with a warning are now applied. Gradients that were not computable are now computable (Razvan P.)
Fix a race condition when determining if g++ is available (Abalkin)
- https://github.com/Theano/Theano/pull/1173 Fred
- Crash fix for tensor.roll(Var, iscalar), reported by Jeremiah Lowin. Memory leak fix on the GPU when allow_gc=False, reported by Jonas Gehring.
- https://github.com/Theano/Theano/pull/1170
Fix GpuSoftmax and GpuSoftmaxWithBias crash on the GTX285 (Fred)
Accept the -ftz=true, --prec-div=false and --prec-sqrt=false options to nvcc via nvcc.flags. Enable all of them with nvcc.flags=--use_fast_math.
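For example, a sketch of enabling the fast-math options through the THEANO_FLAGS environment variable (they can equally be set in .theanorc):

```python
import os
# Must be set before theano is imported; --use_fast_math turns on
# -ftz=true, --prec-div=false and --prec-sqrt=false for nvcc.
os.environ['THEANO_FLAGS'] = 'nvcc.flags=--use_fast_math'
import theano
```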
Fix a compilation crash with LLVM on Mac (Abalkin)
Renamed get_constant_value to get_scalar_constant_value; it now raises the more specific error tensor.basic.NotScalarConstantError (Ian G.)
Make GpuSum work with bigger shapes when summing over the first dimension of a 3d tensor (Fred; reported by Chris Currivan)
Fix a crash due to a race condition when importing theano (Ian G.)
Windows fixes (Fred)
Theano functions now always have a name field, which defaults to None (Fred)
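A small sketch of the new name field (assuming the usual name keyword of theano.function):

```python
import theano
import theano.tensor as T

x = T.dscalar('x')
f = theano.function([x], x ** 2, name='square')
g = theano.function([x], x + 1)      # no name given
print(f.name)                        # 'square'
print(g.name)                        # None: the field now always exists
```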
Better profiling of test time with theano-nose --time-profile
Fix for the new BLAS interface in SciPy (Olivier D.)
Raise an error when theano.shared is called with a Theano variable.
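For illustration, a sketch of what now raises (the offending call is commented out):

```python
import numpy
import theano
import theano.tensor as T

w = theano.shared(numpy.zeros(3))    # fine: shared expects a concrete value
x = T.vector('x')
# theano.shared(x)                   # now raises an error: x is a symbolic variable
```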
More scan optimizations (Razvan P.). TODO: what are they?
Fix OpenMP detection (Pascal L.)
Add tensor_var.sort() (Jeremiah Lowin)
Fix copying of random state between graphs (Guillaume D.)
Make tensor.take support the NumPy syntax (abalkin)
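A minimal sketch of the NumPy-style take (how much of numpy.take's keyword set is covered is an assumption):

```python
import numpy
import theano
import theano.tensor as T

x = T.matrix('x')
idx = T.ivector('idx')
rows = T.take(x, idx, axis=0)        # select rows, as numpy.take does
f = theano.function([x, idx], rows)
f(numpy.arange(12).reshape(3, 4).astype(theano.config.floatX),
  numpy.array([2, 0], dtype='int32'))
```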
Make the ScanSaveMem optimization apply more frequently; previously a warning was printed when it was skipped (Razvan; reported by Abalkin)
Make CrossentropySoftmax1HotWithBiasDx and CrossentropySoftmaxArgmax1HotWithBias support uint* dtypes.
Fix a problem with the broadcast dimensions of the Repeat op (Abalkin)
More determinism, including a new OrderedSet class (Ian, Olivier D., Pascal L.)
Fix eigh grad, which did not always return the right dtype (Fred, Olivier D.)
Crash fix at compilation (Olivier D.)
Fix wrong dtype in sandbox.linalg.ExtractDiag with a shape of 0 (reported by abalkin)
- https://github.com/Theano/Theano/pull/1095 Ian, Olivier D.
Fixes three non-determinism problems:
- Forbids using a dict as the updates argument to theano.compile.function, since this makes the returned function non-deterministic.
- Fixes an issue where grad was non-deterministic.
- The Updates class was not appropriate for representing updates because it is non-deterministic; it is replaced by the OrderedUpdates class. This required changing scan to use the new class.
Also adds some features useful for debugging these issues.
Trying to use the Updates class now returns an OrderedUpdates and prints a warning. Calling theano.function(updates=dict) issues a warning.
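A minimal sketch of deterministic updates using an OrderedDict (a list of (shared_variable, new_expression) pairs works as well; the variable names are illustrative):

```python
from collections import OrderedDict
import theano
import theano.tensor as T

count = theano.shared(0, name='count')
inc = T.iscalar('inc')

updates = OrderedDict()              # ordered, so compilation is deterministic
updates[count] = count + inc
step = theano.function([inc], count, updates=updates)
step(2)
step(3)
print(count.get_value())             # 5
```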
- https://github.com/Theano/Theano/pull/1088 and #1119 (abalkin, Fred)
tensor_var.{diagonal,conjugate} and theano.tensor.{diag,diagonal}. TODO: duplicate with theano.sandbox.linalg.ops.*
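A quick sketch of these helpers (whether every NumPy keyword is supported is an assumption):

```python
import theano.tensor as T

m = T.matrix('m')
d = m.diagonal()                     # tensor_var.diagonal(), as in NumPy
c = m.conjugate()                    # tensor_var.conjugate()
v = T.dvector('v')
D = T.diag(v)                        # theano.tensor.diag: vector -> diagonal matrix
```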
CudaNdarray_prep_output(CudaNdarray ** arr, int nd, const int * dims) (Ian G)
fgraph.name == fn.name (Ian G)
Cross-entropy optimization works when specify_shape is used (Pascal L.)
c_code for the SpecifyShape op (Fred)
DebugMode prints more info when there is an error (Fred)
Crash fix related to dimshuffle (abalkin)
Documentation (David, abalkin, Amir Elaguizy, Olivier D.)