spring-boot-transaction

Transaction annotation

At a high level, when a class declares @Transactional on itself or its members, Spring creates a proxy that implements the same interface(s) as the class you’re annotating. In other words, Spring wraps the bean in the proxy and the bean itself has no knowledge of it. A proxy provides a way for Spring to inject behaviors before, after, or around method calls into the object being proxied.

Internally, it's the same as using transaction advice (via AOP): a proxy is created first and is invoked before/after the target bean's method.

The generated proxy object is supplied with a TransactionInterceptor, which is created by Spring. So when the @Transactional method is called from client code, the TransactionInterceptor gets invoked first from the proxy object, which begins the transaction and eventually invokes the method on the target bean. When the invocation finishes, the TransactionInterceptor commits/rolls back the transaction accordingly.
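The mechanics above can be sketched with a plain JDK dynamic proxy (a simplified illustration of the idea, not Spring's actual TransactionInterceptor; the AccountService interface and the "transaction" log here are invented for the example):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxySketch {
    // hypothetical business interface, standing in for a @Transactional bean
    interface AccountService {
        String transfer();
    }

    static final StringBuilder LOG = new StringBuilder();

    // wraps the target in a proxy that begins/commits a "transaction" around
    // every call, the same shape as Spring's transaction advice
    static AccountService transactional(AccountService target) {
        InvocationHandler handler = (proxy, method, args) -> {
            LOG.append("begin;");                            // interceptor runs first...
            try {
                Object result = method.invoke(target, args); // ...then the target bean
                LOG.append("commit;");                       // commit on success
                return result;
            } catch (Exception e) {
                LOG.append("rollback;");                     // roll back on failure
                throw e;
            }
        };
        return (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[]{AccountService.class}, handler);
    }

    public static void main(String[] args) {
        AccountService service = transactional(() -> { LOG.append("work;"); return "ok"; });
        service.transfer();
        System.out.println(LOG); // begin;work;commit;
    }
}
```

The caller only sees the interface, which is why the bean itself has no knowledge of the interceptor.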

Understanding the propagation and readOnly annotation attributes

Transaction propagation is REQUIRED by default, which means a transaction propagates from a transactional caller to a transactional callee: the callee joins the existing transaction if one is available, otherwise a new one is created. For example, if a read-only transaction calls a read-write transactional method, the whole transaction stays read-only.

Depending on the transaction propagation attribute (e.g. for REQUIRES_NEW), the existing transaction may be suspended/paused at some point, a new one is always started and eventually committed, and after that the first transaction is resumed.
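The suspend/resume behaviour can be sketched in plain Java (a toy transaction stack, not Spring's PlatformTransactionManager; all names here are invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PropagationSketch {
    static final Deque<String> STACK = new ArrayDeque<>();
    static final StringBuilder LOG = new StringBuilder();

    // REQUIRED: reuse the current transaction if one exists, else start a new one
    static void required(Runnable work) {
        if (STACK.isEmpty()) {
            STACK.push("tx" + (STACK.size() + 1));
            LOG.append("begin ").append(STACK.peek()).append("; ");
            work.run();
            LOG.append("commit ").append(STACK.pop()).append("; ");
        } else {
            work.run(); // joins the caller's transaction
        }
    }

    // REQUIRES_NEW: suspend the current transaction, always start (and commit)
    // a new one, then resume the suspended transaction
    static void requiresNew(Runnable work) {
        boolean suspended = !STACK.isEmpty();
        if (suspended) LOG.append("suspend ").append(STACK.peek()).append("; ");
        STACK.push("tx" + (STACK.size() + 1));
        LOG.append("begin ").append(STACK.peek()).append("; ");
        work.run();
        LOG.append("commit ").append(STACK.pop()).append("; ");
        if (suspended) LOG.append("resume ").append(STACK.peek()).append("; ");
    }

    public static void main(String[] args) {
        required(() -> requiresNew(() -> LOG.append("inner work; ")));
        System.out.println(LOG);
        // begin tx1; suspend tx1; begin tx2; inner work; commit tx2; resume tx1; commit tx1;
    }
}
```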

Isolation Level

Read Uncommitted – Allows dirty reads: one transaction can read data that another transaction has modified but not yet committed.
Read Committed – Does not allow dirty reads; a transaction only reads values that have already been committed by other running transactions.
Repeatable Read – If the same data is read twice within one transaction, the result is always the same. This level guarantees that any data once read cannot change.
Serializable – Transactions take locks at all levels (read, range, and write locking), because of which they execute in a fixed sequence. It doesn't allow concurrent transactions and leads to a performance hit.

ms-interview-alg

Latest interview experience (Dec 2019)

No.1

Find the bug in the given code. Two people; two arrays represent the numbers each of them likes, X and Y; find the highestIndex holding the most numbers that both of them like.
Similar to max network.
[1, 2, 3, 4], [2, 3, 4, 3]
Algorithm: build a map; while iterating, save each index as the key, with the value being …

No.2

Given a string in which only two distinct letters appear, find the minimum number of deletions of these letters such that the remaining string still keeps them in alphabetical order.

No.3

Given a string of letters with duplicates, compute the minimum number of letters to delete so that no three identical letters appear consecutively.
Use a counter (cnt) over runs of consecutive identical characters.
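A sketch of the run-counting (cnt) idea for No.3 (my reconstruction of the approach, since only the hint is given): for every maximal run of k identical letters, k - 2 of them must be deleted when k > 2.

```java
public class NoTriples {
    // minimum deletions so that no three identical letters are adjacent
    static int minDeletions(String s) {
        int deletions = 0, cnt = 1;
        for (int i = 1; i <= s.length(); i++) {
            if (i < s.length() && s.charAt(i) == s.charAt(i - 1)) {
                cnt++; // extend the current run
            } else {
                if (cnt > 2) deletions += cnt - 2; // keep at most two per run
                cnt = 1; // start a new run
            }
        }
        return deletions;
    }

    public static void main(String[] args) {
        System.out.println(minDeletions("aaabbbbcc")); // 1 + 2 + 0 = 3
    }
}
```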

golang tutorial

  1. Go objects are passed by value, so changes inside the function won't affect the original object

    func UpdateEmployee(empDetails Employee) {
        empDetails.Name = "Anshul" // modifies the copy only
    }

    var newEmp = Employee{Name: "Mayank", Age: 40, Designation: "Developer"}

    UpdateEmployee(newEmp)
    fmt.Println(newEmp) // Name is still "Mayank"
    1. If you want to pass a Go object by reference, you need to use the pointer syntax, * and &
      func UpdateEmployee(empDetail *Employee) {
          empDetail.Name = "Anshul"
      }

      var newEmployee = Employee{Name: "fdfdsf", Age: 40, Designation: "fdfdsfdsf"}

      UpdateEmployee(&newEmployee)
      // & is used to take the address (reference) of an object
      // * is used to declare a pointer type / dereference a pointer to an object

      // the original object sent to the function is updated

      fmt.Println(newEmployee.Name) // prints "Anshul"

    If you use the new keyword to create an object, it returns the address of the object, i.e. a reference.

    type Employee struct {
        Name        string
        Age         int
        Designation string
        Salary      int
    }

    var newEmployee = new(Employee)

    fmt.Println(newEmployee.Name)

*1 One thing to point out: when using the new keyword, we can't provide initial values for the object's properties. When the object is created, Golang assigns the zero values to its properties. In the above case, accessing the Name property returns an empty string ("").

So any changes made to the object parameter will be reflected in the original object.

func UpdateEmployee(empDetails *Employee) {
    empDetails.Name = "Anshul"
}

var newEmployee = new(Employee)

newEmployee.Name = "Mayank"
newEmployee.Age = 30

UpdateEmployee(newEmployee)

fmt.Println(newEmployee.Name) // prints "Anshul"

In the above case, new returns the reference to the object. Therefore, when invoking the function, we don't need & to pass the object's reference.

*2 Adding a Function to the Struct
A struct not only defines the properties associated with an object but can also represent its behavior. We can add functions to a struct to give it behavior. The syntax for associating a function with a struct is quite different in Golang.

type Employee struct {
    Name        string
    Age         int
    Designation string
    Salary      int
}

func (emp Employee) ShowDetails() {
    fmt.Println("User Name: ", emp.Name)
}

In the above code, we're adding a new function bound to the Employee struct; the binding must be declared explicitly via the receiver. The function can then access the object it was called on through the receiver emp.

type Employee struct {
    Name        string
    Age         int
    Designation string
    Salary      int
}

func (emp Employee) ShowDetails() {
    fmt.Println("User Name: ", emp.Name)
}

var newEmployee = new(Employee)
newEmployee.Name = "Mayank"

newEmployee.ShowDetails()

Cherry Pickup

  • 741. Cherry Pickup *
  1. This is a DP problem, but greedy is a trap mistake: it only leads to a locally optimal solution. Here is a counterexample:

    grid = [[1,1,1,0,1],
            [0,0,0,0,0],
            [0,0,0,0,0],
            [0,0,0,0,0],
            [1,0,1,1,1]]
  2. The right way is to set up a DP over both round trips at once. Since the trip back depends on the first one, we model it as two simultaneous walks and minimize the variables to three: n, i, p (with j = n - i and q = n - p). No bull shit, let's get started!
    We have to maintain two points for the round trips,
    (i, j) and (p, q). They will satisfy one of three constraints:

  3. i < p && j > q
  4. i == p && j == q
  5. i > p && j < q
public int cherryPickup(int[][] grid) {
    int N = grid.length, M = (N << 1) - 1;
    int[][] dp = new int[N][N];
    dp[0][0] = grid[0][0];

    for (int n = 1; n < M; n++) {
        for (int i = N - 1; i >= 0; i--) {
            for (int p = N - 1; p >= 0; p--) {
                int j = n - i, q = n - p;

                // out of the grid, or a thorn: this pair of positions is unreachable
                if (j < 0 || j >= N || q < 0 || q >= N || grid[i][j] < 0 || grid[p][q] < 0) {
                    dp[i][p] = -1;
                    continue;
                }

                // best of the four move combinations from step n - 1
                if (i > 0) dp[i][p] = Math.max(dp[i][p], dp[i - 1][p]);
                if (p > 0) dp[i][p] = Math.max(dp[i][p], dp[i][p - 1]);
                if (i > 0 && p > 0) dp[i][p] = Math.max(dp[i][p], dp[i - 1][p - 1]);

                // collect cherries; count the shared cell only once when i == p
                if (dp[i][p] >= 0) dp[i][p] += grid[i][j] + (i != p ? grid[p][q] : 0);
            }
        }
    }

    return Math.max(dp[N - 1][N - 1], 0);
}

System design for instagram

Use cases and functional requirements:

  1. users can upload/download/view images and videos.
  2. users can search for images/videos by title.
  3. users can follow other users.
  4. the system can generate a News Feed for a user from the users they follow (a kind of timeline).

Non-functional requirements

  1. High availability.
  2. For News Feeds, the system can tolerate up to 200 ms latency.
  3. Lower requirement for consistency: it is acceptable if a user doesn't see new feed items for a short while.
  4. High reliability: an uploaded post can never be lost.

Topics that will not be covered:

  1. tagging images.
  2. searching images/videos by tag.
  3. adding descriptions to images/videos.
  4. recommending users to follow.
  5. feedback and suggestions.

3rd. Design considerations

  1. This is a read-heavy system, so make sure it can fetch images and videos quickly.
  2. There will be a huge number of images, so managing the image/video storage efficiently deserves real focus.
  3. The latency of viewing an image or video should meet the stated requirement.
  4. Data should be 100% reliable: if a user uploads an image, it must never be lost unless deleted by its creator.

4th. Storage estimation

  1. 1 million daily active users; 100 million total users.
  2. 2 million images/videos uploaded per day.
  3. average image size 200 KB.
  4. 2,000,000 * 200 KB = 400 GB per day.
  5. 400 GB * 365 ≈ 142.5 TB per year.

5th

So we have two scenarios:

  1. upload an image, 2. search for an image

    We need an object-storage database (OSD is another topic we might need to cover).

    We might also need an RDBMS to store the metadata.

    6th. Data model

    User table: for user info, uploaded images, followed users.
    Photo model: saves the info for each image/video. We need to index PhotoId and CreationDate, because we want to fetch the most recent images.

    Photo {
    PhotoId: int
    UserId: int
    PhotoPath: varchar(256)
    PhotoLatitude: int
    PhotoLongitude: int
    UserLatitude: int
    UserLongitude: int
    CreationDate: datetime
    }
User {
UserId: int
Name: varchar(20)
Email: varchar(32)
DateOfBirth: datetime
CreationDate: datetime
LastLogin: datetime
}
UserFollow {
userId1: int
userId2: int
}
  1. Obviously this schema needs join operations, so an RDBMS like MS SQL Server/PostgreSQL/MySQL is the natural fit; but joins make the system hard to scale.
    So we can use a NoSQL database instead, and store the images themselves on HDFS, S3, or similar distributed storage.

  2. We can keep the metadata in a distributed key-value store, with all the image info in one table: the key is the photoID, and the value holds the location, userLocation, creation timestamp, and so on.

UserPhoto table: the user-to-image relation; key is userId, value is [...list of photoIds]
UserFollow table: same shape as UserPhoto; key is userId, value is [...list of userIds]

7. Metadata Storage Estimation

User: assuming each "int" and "dateTime" takes 4 bytes, each row of the User table is 68 bytes:
UserID (4 bytes) + Name (20 bytes) + Email (32 bytes) + DateOfBirth (4 bytes) + CreationDate (4 bytes) + LastLogin (4 bytes) = 68 bytes
With 500 million users, we need about 32 GB:
500 million * 68 ~= 32GB

Photo: each row of the Photo table is 284 bytes:
PhotoID (4 bytes) + UserID (4 bytes) + PhotoPath (256 bytes) + PhotoLatitude (4 bytes) + PhotoLongitude (4 bytes) + UserLatitude (4 bytes) + UserLongitude (4 bytes) + CreationDate (4 bytes) = 284 bytes
If users upload 2 million new photos per day, the Photo table grows by about 0.5 GB per day:
2M * 284 bytes ~= 0.5GB per day
So the Photo table will need about 1.88 TB over 10 years.

UserFollow: each row of the UserFollow table is 8 bytes. With 500 million users each following 500 other users on average, the UserFollow table will need about 1.82 TB:
500 million users * 500 followers * 8 bytes ~= 1.82TB

The total storage these tables need over 10 years is therefore about 3.7 TB:
32GB + 1.88TB + 1.82TB ~= 3.7TB
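The arithmetic above checks out if the GB/TB figures are read as binary units (GiB/TiB); a quick sketch using the same row sizes and assumptions as the text:

```java
public class StorageEstimate {
    static final long USERS = 500_000_000L;
    static final long USER_ROW = 4 + 20 + 32 + 4 + 4 + 4;          // 68 bytes per User row
    static final long PHOTO_ROW = 4 + 4 + 256 + 4 + 4 + 4 + 4 + 4; // 284 bytes per Photo row
    static final long PHOTOS_PER_DAY = 2_000_000L;

    public static void main(String[] args) {
        double userGiB = USERS * USER_ROW / Math.pow(2, 30);                 // ~32 GiB
        double photoTenYearsTiB =
                PHOTOS_PER_DAY * PHOTO_ROW * 365.0 * 10 / Math.pow(2, 40);   // ~1.88 TiB
        double followTiB = USERS * 500.0 * 8 / Math.pow(2, 40);              // ~1.82 TiB
        System.out.printf("user=%.1f GiB, photo(10y)=%.2f TiB, follow=%.2f TiB%n",
                userGiB, photoTenYearsTiB, followTiB);
    }
}
```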


8. Component Design

Photo uploads (writes) can be slow because they go to disk, while reads are fast, especially with caching in place.

Web servers have a connection limit, and we must design around it. Because uploads are slow, users' upload requests can consume all the available connections; if the system is busy serving writes, "read" requests cannot be handled. If a web server can handle at most 500 connections at any point in time, it cannot handle more than 500 concurrent uploads or reads. To handle this bottleneck, we can split the handling of "read" and "write" requests into separate services on separate servers, so neither blocks the system.

Separating "read" and "write" request handling also lets us scale and optimize each of these services independently.

9. Reliability and Redundancy (single point of failure)

The design goal is that the system must never lose a file. Therefore, for every uploaded file we store multiple copies, so that if one storage server fails we can still retrieve a copy from another.

The same principle applies to the other components of the system. For high availability, every service running in the system must have multiple replicas, so that if one instance dies the system stays available and keeps serving. Redundancy thus removes the system's single points of failure.

If only one instance of a service is required to run at any point in time, we can run a second, redundant instance that handles no requests; as soon as the primary has a problem, the redundant instance can immediately take over via failover.

Building redundancy into a system removes its single points of failure and provides backup functionality in a crisis. Suppose two instances of the same service run in production; if one fails or degrades, the system can fail over to the other. Failover can be triggered automatically or performed manually.

10. Database Sharding

Below we discuss approaches to sharding the metadata.

a. Partitioning based on UserID. If we shard on UserID, we can keep all photo metadata of one user on the same shard. If one database shard holds 1 TB, we need 4 shards to store 3.7 TB of data; for better performance and scalability, assume we keep 10 shards.
We can then find the ShardID as UserID % 10 and store the data there. To uniquely identify any photo in the system, we can append the ShardID to each PhotoID.
How do we generate PhotoIDs? Each database shard has its own auto-increment sequence for PhotoIDs, and since we append the ShardID to each PhotoID, every PhotoID is unique across the system.
What problems does this sharding design have?
How do we handle hot users? Some users are followed heavily, and people tend to view every photo such a user uploads.
Some users upload far more photos than others, which leads to a non-uniform distribution of storage.
If we cannot fit all of a user's photo metadata on one shard, we would have to spread it over several shards; would that cause higher latency?
Also, keeping all of a user's photo metadata on a single shard means that if that shard fails, or runs with high latency because it is overloaded, all of that user's data becomes unavailable.

b. Partitioning based on PhotoID. If we generate unique PhotoIDs first and then find the ShardID as PhotoID % 10, the above problems are solved. We don't need to append the ShardID to the PhotoID, because the PhotoID itself is already unique across the system.
How do we generate PhotoIDs, then? A per-shard auto-incrementing sequence makes no sense here, because we generate the PhotoID first and use it to find the shard. One solution is to run a dedicated database instance that generates auto-incrementing IDs. If our PhotoID is 64 bits, we can define a table containing only a 64-bit ID column; whenever a new photo is uploaded, we insert a row into this table and take that row's ID as the new photo's PhotoID.
Wouldn't this ID-generating database instance be a single point of failure? Yes, it could be. A workaround is to define two database instances, one generating even IDs and the other odd IDs.
We can put a load balancer in front of these two instances and round-robin between them. The two instances may generate different numbers of IDs, but that causes no problem for the system. We can apply the same design idea to other tables such as Users and Photo-Comments, or to other components.
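The odd/even scheme can be sketched like this (a toy in-memory version; in reality each generator would be a separate database instance behind the load balancer):

```java
import java.util.concurrent.atomic.AtomicLong;

public class KeyGenerators {
    // two independent generators: one hands out even IDs, the other odd IDs,
    // so neither is a single point of failure and they can never collide
    static final AtomicLong EVEN = new AtomicLong(0);
    static final AtomicLong ODD = new AtomicLong(1);

    static long nextEven() { return EVEN.getAndAdd(2); }
    static long nextOdd()  { return ODD.getAndAdd(2); }

    // the "load balancer": round-robin between the two instances
    static boolean toggle = false;
    static long nextId() {
        toggle = !toggle;
        return toggle ? nextEven() : nextOdd();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) System.out.println(nextId()); // 0, 1, 2, 3
    }
}
```

If one generator goes down, the other can keep issuing IDs alone; the sequence just becomes all-even or all-odd for a while, which is harmless.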

What if data grows massively in the future? We can create multiple logical partitions on each physical database server to accommodate future growth. Since one physical server can run several database instances, each logical partition can run its own independent instance. If we notice one database server filling up, we can migrate some of its logical partitions to another server. We can maintain a config file (or a database) that maps logical partitions to database servers; this makes moving partitions easy. Whenever we want to move a partition, we only have to update this mapping.

11. News Feed

A user's News Feed is made up of the most recent and most popular photos of all the users they follow.

The basic flow of generating the News Feed is as follows. Suppose a user's News Feed should contain 100 photos. Our application server first fetches the list of all users the user follows, then gets the metadata of each followed user's latest 100 photos. The server then submits this metadata to our ranking algorithm, which selects the top 100 photos based on recency and the user's preferences and returns them to the user. The problem with this flow is high latency, because we must query multiple tables and sort/merge/rank the results. To improve efficiency, we can pre-generate the News Feed and store the result in a separate table.

Pre-generating the News Feed: we can have dedicated servers continuously generating users' News Feeds and storing the results in a UserNewsFeed table. Then, whenever we need to return the News Feed to a user, we simply query this table and return the results.

Whenever these servers need to generate the News Feed for a user, they first query the UserNewsFeed table to find that user's most recently generated feed. Then, new feed items are generated from the data after that point in time, following the steps above.

What are the different approaches for sending the News Feed to users?

  1. Pull: clients can request the News Feed from the server at some regular interval or manually. The problems with this approach: a) the client's News Feed only updates when the client sends a request; b) if the server has no new data, most client requests just get an empty response.

  2. Push: as soon as the server has new data, it pushes the News Feed to the clients. Clients need to maintain a long-poll request to receive the updates. The problem with this approach: if a user is followed by huge numbers of people, e.g. a celebrity with millions of followers, the push load on the server is enormous,
    because the new feed items have to be force-pushed to all of the followers, which would overwhelm the server.

  3. Hybrid: a hybrid approach combines the advantages of Pull and Push. For users with large numbers of followers, the News Feed is fetched via Pull; for users with only a few followers, we push the updates. Another hybrid variant: the server pushes the News Feed to all users at some fixed frequency, while users with many followers are fetched via Pull at a regular interval.

12. Generating the News Feed from Sharded Data

One of the most important requirements for generating a user's News Feed is fetching the latest photos of all the users they follow. For this we need an efficient way to sort photos by creation time. We can make the creation time part of the PhotoID; since PhotoID is indexed, we can find the latest PhotoIDs quickly via the index.
We can use an epoch timestamp here. Suppose our PhotoID consists of two parts: a timestamp and an auto-incrementing sequence. To create a PhotoID, we take the current timestamp and obtain an auto-incrementing ID from the key-generating database mentioned in section 10. We can then compute the ShardID from the PhotoID, i.e. PhotoID % 10, and store the photo's metadata on that shard.

How much storage does a PhotoID need? Say our timestamp epoch starts today; how many bits do we need to store a seconds-resolution timestamp for the next 50 years?
86400 (seconds) * 365 (days) * 50 (years) => 1,600,000,000
We need 31 bits to store this number. On average, users upload 23 photos per second; we can allocate 9 bits for the auto-incremented sequence, which lets us store up to 512 photos per second, i.e. 2 to the 9th power. We can reset the auto-incremented sequence every second.
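The bit layout described above can be sketched as follows (a hypothetical epoch and field widths, just to illustrate: 31 bits of seconds-since-epoch, 9 bits of per-second sequence):

```java
public class PhotoId {
    static final long EPOCH = 0L; // assumed custom epoch, in seconds

    // pack the timestamp into the high bits and the 9-bit sequence into the low bits
    static long make(long epochSeconds, int sequence) {
        return (epochSeconds - EPOCH) << 9 | (sequence & 0x1FF);
    }

    // recover the creation time from the ID
    static long creationSeconds(long photoId) {
        return (photoId >> 9) + EPOCH;
    }

    // shard assignment as described in section 10
    static int shardId(long photoId) {
        return (int) (photoId % 10);
    }

    public static void main(String[] args) {
        long id = make(1_600_000_000L, 7);
        System.out.println(id + " shard=" + shardId(id));
    }
}
```

Because the timestamp occupies the high bits, PhotoIDs are monotonically increasing in time, so sorting by PhotoID is sorting by creation time.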
13. Caching and Load Balancing

Our service needs a massive photo delivery system to serve users spread across the globe: large numbers of geographically distributed photo cache servers and CDNs, so that users fetch content from servers close to them.

We can introduce a cache for the hot data in the metadata database. Here we can use Memcache; application servers check the cache for the data they need before hitting the database. LRU is a viable eviction policy for our system: under it, the least recently used data is discarded first.

How can we build a more effective cache? If we apply the 80/20 rule, 20% of the photos each day generate 80% of the traffic, which means a subset of photos is extremely popular and viewed by most people. So we can consider caching 20% of each day's photos and their metadata.
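A minimal in-process LRU cache of the kind described can be sketched with java.util.LinkedHashMap's access-order mode (the production metadata cache would be Memcache, not in-process; the class name is made up for the example):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PhotoMetadataCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public PhotoMetadataCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }

    public static void main(String[] args) {
        PhotoMetadataCache<Long, String> cache = new PhotoMetadataCache<>(2);
        cache.put(1L, "meta1");
        cache.put(2L, "meta2");
        cache.get(1L);          // touch 1, so 2 becomes the eldest
        cache.put(3L, "meta3"); // evicts 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```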

Translator's notes:

1) Section 10 is not well written; it is unclear whether the author actually wants all of a user's data on a single shard.
2) Links I collected for some of the concepts in the text:
File storage vs object storage:
https://www.zhihu.com/question/21536660
SQL vs NoSQL:
https://www.jianshu.com/p/b32fe4fe45a3
Column-oriented storage:
https://blog.csdn.net/dc_726/article/details/41143175
Data sharding:
http://blog.zhaojie.me/2010/03/sharding-by-id-characteristic.html

benefit from spring data

  1. Benefits of using Spring Data

Reduces boilerplate code

Example: a traditional JDBC connection

public Room getRoomJDBC() throws Exception {
    Statement statement = null;
    Connection connection = null;
    try {
        Class.forName("org.h2.Driver");
        connection = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
        statement = connection.createStatement();
        String sql = "SELECT * FROM ROOM WHERE ROOM_NUMBER = 'P1'";
        ResultSet resultSet = statement.executeQuery(sql);
        while (resultSet.next()) {
            Room room = new Room();
            room.setId(resultSet.getLong("ROOM_ID"));
            room.setBedInfo(resultSet.getString("BED_INFO"));
            room.setName(resultSet.getString("ROOM_NAME"));
            room.setNumber(resultSet.getString("ROOM_NUMBER"));
            return room;
        }
    } catch (SQLException sqle) {
        return null;
    } finally {
        if (statement != null) {
            statement.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
    return null;
}

Instead, with Spring Data, it could be just one line of code:

public Room getRoomSpringData() {
    return this.repository.findByNumber("P1");
}

This saves a lot of code and time, and removes the possibility of slipping an error into those 20+ lines of boilerplate.

Provides the ability to swap out data sources much more easily

Always focus on the business logic, not on the data access code.

Key Component

Entity

js-asyc-promise-reading-summary

nested promise

  1. Request code:

    const makeRequest = async () => {
        const response = await fetch(url);
        if (response.needAnotherRequest) {
            const secondResponse = await makeAnotherRequest(response);
            console.log(secondResponse);
            return secondResponse;
        }
        console.log(response);
        return response;
    }
  2. sequencing a list of POST requests:

    let promise = Promise.resolve();
    let posts = [{}, {}, {}];

    posts.forEach(post => {
        // construct a new promise and replace the old one;
        // each .then callback runs only after the previous insert resolves
        promise = promise.then(() => {
            return db.insert(post);
        });
    });

    promise.then(() => {
        // all your POST requests have been made, one by one
    });

If you want to run the inserts concurrently, note that a version like

    posts.forEach(post => {
        promise = promise.then(db.insert(post));
    })

is not correct: .then ignores non-function arguments, so every db.insert(post) starts immediately and nothing is actually chained or awaited. The idiomatic concurrent version is Promise.all:

    // start all inserts at once and wait for all of them
    Promise.all(posts.map(post => db.insert(post)))
        .then(results => {
            // all inserts have finished
        });

sequential vs parallelism

Chain .then (or await in a loop) for sequential execution; for parallelism, generate the whole list of promises immediately and Promise.all them (or yield them all in a generator).

leetcode 927 Three Equal Parts

Leetcode 927. Three Equal Parts

– count how many ones there are; if (numOne % 3 != 0) return [-1, -1]
– search from the right side to the left until we have found numOne/3 ones; this index is not the final answer, but it defines the pattern of ones
– from the left, skip zeros, then match the pattern found in step 2 to get the first end index
– do another round of matching to find the second end index

public int[] threeEqualParts(int[] A) {
    int numOne = 0;
    for (int i : A) {
        if (i == 1) numOne++;
    }
    int[] noRes = { -1, -1 };
    if (numOne == 0) return new int[]{ 0, 2 };
    if (numOne % 3 != 0) return noRes;
    // the last numOne/3 ones define the pattern of the third part
    int idxThird = 0;
    int temp = 0;
    for (int i = A.length - 1; i >= 0; i--) {
        if (A[i] == 1) {
            temp++;
            if (temp == numOne / 3) {
                idxThird = i;
                break;
            }
        }
    }
    int res1 = findEndIdx(A, 0, idxThird);
    if (res1 < 0) return noRes;
    int res2 = findEndIdx(A, res1 + 1, idxThird);
    if (res2 < 0) return noRes;
    return new int[]{ res1, res2 + 1 };
}

// skip leading zeros, then match A[left..] against the pattern A[right..]
private int findEndIdx(int[] A, int left, int right) {
    while (A[left] == 0) left++;
    while (right < A.length) {
        if (A[left] != A[right]) return -1;
        left++;
        right++;
    }
    return left - 1;
}

Leetcode 124 Binary Tree Maximum Path Sum

Leetcode 124. Binary Tree Maximum Path Sum

  • Requirement:
    Find the path with the maximum sum; the path must hang off a single highest tree node (it need not pass through the root).

    1
    2   3
  • DFS Logic: design a helper function that returns the max single-branch path sum ending at a node, i.e. max(left, right) + node.val, and
    during the recursion, update the result with max(res, left + node.val + right);
    the code is as follows:

    public class Solution {
        int maxValue; // res

        public int maxPathSum(TreeNode root) {
            maxValue = Integer.MIN_VALUE;
            maxPathDown(root);
            return maxValue;
        }

        private int maxPathDown(TreeNode node) {
            if (node == null) return 0;
            // max path drilling down from the left/right child (ignore negative branches)
            int left = Math.max(0, maxPathDown(node.left));
            int right = Math.max(0, maxPathDown(node.right));
            // best path with this node as the highest point
            maxValue = Math.max(maxValue, left + right + node.val);
            // best single-branch path ending at this node
            return Math.max(left, right) + node.val;
        }
    }