
An Introductory Exploration of Functional Programming

"Becoming Functional" 是一本关于函数式编程的书籍,由 Joshua Backfield 撰写,探讨了如何从传统的编程思维转向函数式编程思维。 在计算机科学领域,函数式编程是一种编程范式,它强调通过使用数学函数来构造程序,而不是通过改变状态或共享数据来实现计算。这种编程风格鼓励使用纯函数(即没有副作用的函数),避免了变量的可变性,从而简化了代码,提高了可读性和可维护性。"Becoming Functional" 这本书可能是为了引导开发者理解和掌握这种编程方法,提升他们的编程技巧和解决问题的能力。 书中的内容可能涵盖了以下函数式编程的核心概念和实践: 1. **函数式编程的基本原理**:包括函数作为一等公民、纯函数、不可变数据结构、高阶函数(如 map、filter 和 reduce)等基本概念。 2. **λ演算与函数抽象**:λ演算是函数式编程的理论基础,书中可能会介绍 λ 表达式和函数应用,以及它们如何在实际编程语言中体现。 3. **递归和尾递归**:函数式编程中常用递归解决循环问题,书里可能会讲解如何使用递归以及优化递归的方式,比如尾递归。 4. **类型系统**:函数式编程语言通常具有强类型或静态类型系统,有助于发现潜在错误。书可能探讨了类型推断和类型系统的应用。 5. **Monads**:Monads 是函数式编程中的一个重要概念,常用于处理副作用和控制流。书可能深入解释了 Monads 的工作原理和实际应用。 6. **函数式编程语言的特性**:书中可能涵盖了一些流行函数式编程语言,如 Haskell、Scala、Clojure 或者 Lisp,讨论它们的设计哲学和语法特性。 7. **函数式编程与并行处理**:由于函数式编程中的不可变数据和无副作用,它在多线程和并行计算中表现出色,书可能涉及这一领域的最佳实践。 8. **函数式编程的实际应用**:作者可能通过示例展示如何在实际项目中使用函数式编程,例如在数据处理、Web 开发或算法设计中的应用。 9. **从面向对象到函数式的转变**:对于熟悉面向对象编程的读者,书可能提供了一条逐步过渡到函数式编程的路径,包括解决常见挑战和转换思维模式的方法。 10. **错误处理和调试**:在函数式编程中,错误处理往往通过异常或者返回值来表示,书会介绍这些策略以及如何有效地调试函数式代码。 通过阅读 "Becoming Functional",开发者可以学习如何利用函数式编程的思维方式和工具来提高代码质量,减少错误,并提高开发效率。这本书可能是对任何希望深入理解函数式编程的程序员的宝贵资源。

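And as a hedged sketch of the object-oriented-to-functional transition mentioned in item 9 (again illustrative Python, not code from the book), the same computation written first with mutable state and then as a side-effect-free expression:

```python
# Imperative style: a mutable accumulator updated inside a loop.
def total_even_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total

# Functional style: no mutation, just an expression built from smaller pure pieces.
def total_even_functional(numbers):
    return sum(n for n in numbers if n % 2 == 0)

assert total_even_imperative([1, 2, 3, 4]) == total_even_functional([1, 2, 3, 4]) == 6
```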
Related recommendations


NotImplementedError: Cannot convert a symbolic Tensor (lstm_1/strided_slice:0) to a numpy array.

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Cell In[20], line 51
     48 start_time = time.time()
     50 # Choose the model type to train (rnn/lstm/gru)
---> 51 train_model(model_type='lstm')
     53 # Compute the total elapsed time
     54 total_time = time.time() - start_time

Cell In[20], line 13, in train_model(model_type)
     11     model = build_gru_model()
     12 else:  # default to LSTM
---> 13     model = build_lstm_model()
     15 # Print the model structure
     16 model.summary()

Cell In[16], line 2, in build_lstm_model()
      1 def build_lstm_model():
----> 2     model = keras.Sequential([
      3         layers.Embedding(total_words, embedding_len, input_length=max_review_len),
      4         layers.LSTM(64, return_sequences=False),
      5         layers.Dense(1, activation='sigmoid')
      6     ])
      7     return model

The failure then propagates through the following TensorFlow and NumPy internals:

File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/training/tracking/base.py:457, in no_automatic_dependency_tracking.<locals>._method_wrapper
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/engine/sequential.py:113, in Sequential.__init__
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/training/tracking/base.py:457, in no_automatic_dependency_tracking.<locals>._method_wrapper
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/engine/sequential.py:195, in Sequential.add
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/recurrent.py:623, in RNN.__call__
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/engine/base_layer.py:854, in Layer.__call__
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/recurrent.py:2548, in LSTM.call
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/recurrent.py:681, in RNN.call
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/recurrent.py:798, in RNN._process_inputs
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/recurrent.py:605, in RNN.get_initial_state
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/recurrent.py:2313, in LSTMCell.get_initial_state
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/recurrent.py:2752, in _generate_zero_filled_state_for_cell
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/recurrent.py:2768, in _generate_zero_filled_state
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/util/nest.py:536, in map_structure
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/util/nest.py:536, in <listcomp>
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/recurrent.py:2765, in _generate_zero_filled_state.<locals>.create_zeros
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/ops/array_ops.py:2338, in zeros
File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/ops/array_ops.py:2295, in _constant_if_small
File <__array_function__ internals>:180, in prod
File /opt/conda/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3088, in prod
File /opt/conda/lib/python3.8/site-packages/numpy/core/fromnumeric.py:86, in _wrapreduction

File /opt/conda/lib/python3.8/site-packages/tensorflow_core/python/framework/ops.py:735, in Tensor.__array__(self)
    734 def __array__(self):
--> 735     raise NotImplementedError("Cannot convert a symbolic Tensor ({}) to a numpy"
    736                               " array.".format(self.name))

NotImplementedError: Cannot convert a symbolic Tensor (lstm_1/strided_slice:0) to a numpy array.
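For reference, the notebook cell that triggers this traceback (Cell In[16]) builds the model roughly as sketched below. The hyperparameter values are placeholders; `total_words`, `embedding_len`, and `max_review_len` are defined elsewhere in the notebook. This particular NotImplementedError is widely reported as a version mismatch between an older TensorFlow build (the `tensorflow_core` paths suggest TF 1.15/2.0) and NumPy 1.20 or newer, so checking and aligning those two versions is usually the first step rather than changing the model code.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder hyperparameters; the notebook defines its own values elsewhere.
total_words = 10000      # vocabulary size
embedding_len = 100      # embedding dimension
max_review_len = 80      # length of each padded review

def build_lstm_model():
    # Same structure as the failing cell: Embedding -> LSTM -> Dense(sigmoid)
    model = keras.Sequential([
        layers.Embedding(total_words, embedding_len, input_length=max_review_len),
        layers.LSTM(64, return_sequences=False),
        layers.Dense(1, activation='sigmoid'),
    ])
    return model
```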
4. Use the trained model to predict the category of a text

# Optional

The parameters of Keras's fit method are defined as follows:

    def fit(self, x=None, y=None, batch_size=None, epochs=1, verbose='auto',
            callbacks=None, validation_split=0., validation_data=None, shuffle=True,
            class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None,
            validation_steps=None, validation_batch_size=None, validation_freq=1,
            max_queue_size=10, workers=1, use_multiprocessing=False):
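As a small usage sketch of that signature (not from the original notebook: the random `x_train`/`y_train` arrays below are stand-ins for its real padded reviews and labels, and `build_lstm_model` is the function sketched above), a typical call might look like this:

```python
import numpy as np

# Dummy stand-ins for the notebook's real data: integer-encoded padded reviews
# of length max_review_len and binary sentiment labels.
x_train = np.random.randint(0, total_words, size=(1000, max_review_len))
y_train = np.random.randint(0, 2, size=(1000,)).astype('float32')

model = build_lstm_model()  # the sketch shown above
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

history = model.fit(
    x_train, y_train,
    batch_size=128,         # samples per gradient update
    epochs=2,               # passes over the (dummy) training data
    validation_split=0.1,   # hold out 10% of the data for validation
    shuffle=True,           # reshuffle the training data each epoch
    verbose=1,              # progress bar
)
```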